2016-03-27 GnuCash IRC logs

01:06:33 <jralls> jscinoz: Looks like tomorrow. The Windows build has been running 21 hours and is still not finished. But the only rendering issue that I know of is on Gentoo and is easily fixed by the user.
01:20:12 <jralls> warlord: The problem is likely not code, nor the winxp VM. "VMWare Infrastructure Web Access" is what the page at https://vmhost.ihtfp.org/ calls itself, and you said this afternoon that a process called WebAccess was at the top of top on the host OS. Maybe that's a problem. At this point I can't even get it to log in.
01:20:17 <jralls> warlord: Maybe you're under attack.
01:37:02 <jscinoz> jralls: I'm on gentoo; what is the fix for the rendering issue? I might have missed it
02:01:10 *** Mechtilde has joined #gnucash
02:44:23 *** rubdos has joined #gnucash
02:58:28 *** husker has joined #gnucash
03:33:42 *** mib_57d9ss has joined #gnucash
03:34:59 *** mib_57d9ss has quit IRC
03:38:12 *** bernard has joined #gnucash
03:39:02 *** bernard has left #gnucash
03:45:32 *** ecocode has joined #gnucash
04:07:50 *** fabior has joined #gnucash
04:18:24 *** Mechtilde has quit IRC
04:18:48 *** Mechtilde has joined #gnucash
04:32:27 *** fell__ has quit IRC
05:54:26 *** ecocode` has joined #gnucash
05:54:26 *** ecocode has quit IRC
05:56:24 *** Jimraehl1 has left #gnucash
05:57:26 *** Jimraehl1 has joined #gnucash
06:06:09 <warlord> finster: Hmm: this is bad, right?
06:06:10 <warlord> %Cpu(s): 0.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa,100.0 hi, 0.0 si, 0.0 st
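[That top line shows 100% of CPU time in 'hi' (servicing hardware interrupts), which usually points at a runaway IRQ line rather than a busy process. A minimal sketch for finding the loudest IRQ, assuming the standard Linux /proc/interrupts layout (IRQ number, one count column per CPU, then descriptor fields):]

```shell
# Sum the per-CPU counts for each IRQ in /proc/interrupts and list the
# busiest lines. The column layout is an assumption (IRQ:, counts..., name).
top_irqs() {
    awk 'NR > 1 {
        irq = $1; sub(":", "", irq)
        total = 0
        for (i = 2; i <= NF && $i ~ /^[0-9]+$/; i++) total += $i
        print total, irq, $NF
    }' "$1" | sort -rn | head
}

# On a live system (guarded so the sketch degrades gracefully elsewhere):
if [ -r /proc/interrupts ]; then top_irqs /proc/interrupts; fi
```

[The counts are cumulative since boot, so sample twice a few seconds apart and compare to see the current rate.]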
06:08:03 <finster> warlord: yes, pretty
06:08:15 <finster> the system must be pretty unresponsive
06:08:26 <warlord> It's not too bad locally.. but load is 33
06:09:01 <warlord> Right now I'm thinking about how I could free up some of my HDD slots in the server; shouldn't be too hard to do.. But it will require some downtime (and the purchase of a pair of 2TB SSDs)
06:09:24 <finster> how does the cpu usage of the host compare to the guest? is the host still showing no signs of cpu wait?
06:09:30 <warlord> From the host:
06:09:30 <warlord> Cpu(s): 21.6%us, 18.8%sy, 0.0%ni, 58.2%id, 1.3%wa, 0.1%hi, 0.1%si, 0.0%st
06:10:02 <finster> before actually spending the money I'd make 100% sure it's the disk I/O. the host OS is not showing significant i/o wait, so that's a bit odd
06:10:16 <warlord> okay...
06:10:18 <finster> did the performance problems arise gradually or did they start at some specific point of time
06:10:18 <finster> ?
06:10:21 <warlord> Hard to say.
06:10:23 <warlord> They seem to come and go.
06:10:32 <finster> is there any performance monitoring in place?
06:10:38 <warlord> There is definitely a correlation with high disk-io in one VM ..
06:10:38 <warlord> "performance monitoring"??? hahahaha
06:10:47 <finster> got me there :)
06:10:47 <warlord> No, generally it's limited to me watching "top"
06:11:04 <finster> okay, if you're sure the performance problems correlate with i/o operations on one vm (maybe iostat/iotop can help confirm this) new disks are the way to go. as to moving the storage: do you use LVM?
06:11:16 <warlord> yes, the underlying system on the host is LVM..
06:11:34 <warlord> /dev/mapper/VolGroup00-LogVol02
06:11:34 <warlord> 7.2T 1.6T 5.3T 23% /vmware
06:11:51 <finster> okay. depending on the distribution of disks/data/lvs you could use pvmove to move data off disks and remove that disk from a volume group later
06:11:53 <warlord> My plan would be to reduce this, remove a pair of 2TB spinning disks, replace them with SSD, and create a new LV to house /vmware
06:12:01 <finster> oh, damn, vmware.
06:12:01 <warlord> It's Linux
06:12:02 <warlord> It's a Fedora-based system
06:12:19 <warlord> But I can't pvmove -- I have to resize2fs first
06:12:22 <warlord> (all my extents are used)
06:12:39 <finster> yes, you'll need to shrink the filesystem then
06:12:49 <warlord> I've got the recipe. fsck, resize2fs, lvreduce, resize2fs, pvmove, pvremove
06:12:49 <finster> sounds good
06:12:50 <finster> probably e2fsck -f
06:12:56 <warlord> yes
06:12:58 <finster> you'll probably want to have a backup handy
06:13:03 <warlord> Yep.. It will take a lot of downtime to do all this.
06:13:17 <warlord> (in my copious amounts of free time)
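[The shrink-and-migrate recipe above can be sketched as a command sequence. Everything here is a sketch under assumptions: the LV path is the one pasted earlier, but the target size (5T) and the disk to retire (/dev/sdX) are hypothetical placeholders, and the filesystem must be unmounted first since ext4 cannot be shrunk online. One step the recipe in the log omits is vgreduce — pvremove refuses while the PV still belongs to a volume group. DRY_RUN=1 only prints the commands.]

```shell
# Shrink the filesystem, shrink the LV, move extents off a disk, retire it.
# LV path is from the log; 5T and /dev/sdX are hypothetical placeholders.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

LV=/dev/mapper/VolGroup00-LogVol02

run umount /vmware
run e2fsck -f "$LV"                    # force a full check before any resize
run resize2fs "$LV" 5T                 # shrink the fs below the new LV size
run lvreduce -L 5T VolGroup00/LogVol02
run resize2fs "$LV"                    # grow the fs back to exactly the LV size
run pvmove /dev/sdX                    # migrate extents off the disk to retire
run vgreduce VolGroup00 /dev/sdX       # drop the now-empty PV from the VG
run pvremove /dev/sdX
run mount "$LV" /vmware
```

[Keeping a verified backup handy before the lvreduce, as finster suggests, is the important part: an lvreduce below the filesystem's actual size destroys data.]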
06:13:41 <finster> i'd be willing to assist, if needed. however, you probably do not want to entrust the infrastructure to a random internet stranger :)
06:13:46 <warlord> I suppose I could do most of this even before I purchase any SSDs
06:13:56 <warlord> Thank you; I appreciate the offer. Not sure how that would work ;)
09:49:06 <warlord> It is clearly something that will need to be scheduled, since it will take a lot of downtime.
09:49:12 <warlord> I'm not willing to do this with the system running.
09:49:17 <warlord> (even though, technically, it would probably work)
09:49:46 <finster> enlarging filesystems is quite stable on a running system in my experience. but shrinking is a completely different story. so, I'd second doing the operation off-line
09:50:04 <warlord> Yeah, I've enlarged online several times.
09:55:29 *** jesse has joined #gnucash
09:57:34 <warlord> I'm going to kill HTTPD on code for a short time...
10:05:53 <warlord> Well.... That helped...
10:06:03 <warlord> %Cpu(s): 0.0 us, 0.0 sy, 0.0 ni, 5.3 id, 35.1 wa, 38.3 hi, 21.3 si, 0.0 st
10:06:53 <warlord> Let me see if the system will stabilize a bit; and then I'll restart httpd
10:07:30 <finster> mmh, depending on the looks of the access logs maybe the server is under attack, as someone suggested yesterday...
10:08:11 <warlord> Either that or it just got bogged down and couldn't respond to requests fast enough?
10:08:31 <finster> both options are possible. but it's a bit of stabbing in the dark without data (i.e. logs)
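[One cheap way to get that data is to rank client IPs in the Apache access log; a flood from a handful of addresses shows up immediately at the top. A sketch, assuming the common log format with the client IP in the first field — the log path varies by distro and is an assumption:]

```shell
# Count requests per client IP in an access log; a DoS or an aggressive
# crawler will dominate the top of the list.
top_ips() { awk '{print $1}' "$1" | sort | uniq -c | sort -rn | head; }

LOG=/var/log/httpd/access_log   # path is an assumption; adjust per distro
if [ -r "$LOG" ]; then top_ips "$LOG"; fi
```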
10:08:33 <warlord> It is still sitting in a lot of disk-wait.
10:08:42 <finster> any chance of installing iotop?
10:09:45 <finster> some process must still be chewing on the disk
10:10:10 <warlord> Maybe the system just needs a reboot...
10:10:12 <warlord> (the VM)
10:10:23 <finster> mmmh, yes, that might be an option. otoh, if it's a malicious process that may be gone after a reboot
10:11:22 <warlord> True.
10:11:43 <warlord> The top processes are trac-admin (running a backup), freshclam (running an update)...
10:11:55 <warlord> (in terms of CPU usage)
10:12:03 <warlord> Still trying to get iotop
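[Until iotop is installed, a crude device-level view is available from /proc/diskstats: fields 6 and 10 are sectors read and written since boot (an assumption about the standard Linux layout). It won't name the guilty process the way iotop does, but sampling it twice shows which disk the I/O is landing on:]

```shell
# Rank block devices by cumulative sectors read + written. Take two
# samples a few seconds apart and compare to get the current rate.
io_by_dev() { awk '{print $6 + $10, $3}' "$1" | sort -rn | head; }

if [ -r /proc/diskstats ]; then io_by_dev /proc/diskstats; fi
```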
10:13:25 <finster> however, I need to leave for family business
10:14:03 *** finster has quit IRC
10:15:02 <warlord> okay. thanks.
10:23:32 *** finster has joined #gnucash
10:24:32 <jralls> jscinoz: See https://bugzilla.gnome.org/show_bug.cgi?id=763279.
10:28:39 <jralls> warlord: Can you see if the release build finished?
10:29:37 <jralls> warlord: Never mind, I managed to get into the win32 VM. It's building aqbanking.
10:30:14 *** fell has joined #gnucash
10:31:05 <warlord> jralls: Ah, glad you got in..
10:32:36 <warlord> I've got to go do some yard work... So I'll be AFK for a while. I'll restart httpd later.
11:04:16 <jralls> warlord: OK.
11:52:15 *** fabior has quit IRC
12:07:58 <warlord> Restarting httpd
12:08:04 <warlord> Looks like the major process is a trac backup.
12:08:18 <warlord> I'm wondering if I still need to run that process?
12:08:50 <jralls> warlord: Seems unlikely since we don't use trac anymore.
12:10:56 <jralls> But it also seems unlikely that that would soak code the way it's been the last few days, and especially yesterday.
12:19:38 <jralls> Did you actually restart httpd on code? I'm still getting "Unable to connect" from Firefox.
13:25:26 *** fell_ has joined #gnucash
13:27:34 *** fell has quit IRC
13:54:45 <warlord> I did; but apparently the restart failed.
13:54:52 <warlord> I restarted it again, and now it should be up
13:55:11 <warlord> Of course it's back to its 100% 'hi' status :(
13:55:19 <jralls> Yeah, it made the connection but the page isn't getting served.
13:55:29 <jralls> Do you have a firewall? It sure feels like you're getting attacked.
13:58:49 <jralls> 10 minutes and the page still isn't served. You might as well shut httpd back down.
14:05:52 <warlord> jralls: Maybe I should just reboot...
14:06:33 <jralls> Just code, right? The win32 VM is finally working on GnuCash.
14:08:16 <jralls> The page on code was finally served about 2 min ago, 15 min total wait.
14:12:50 <warlord> Yeah, just code...
14:13:27 <warlord> jralls: I need to run... so either need to reboot now or not until ~7-8pm ET
14:14:07 <jralls> Go ahead and reboot.
14:16:54 <warlord> ok.
14:17:02 <warlord> rebooting
14:18:49 <warlord> I suspect it'll be a bit until it comes back.
14:20:35 *** gncbot has joined #gnucash
14:22:39 <finster> re.
14:22:54 <finster> so no further insights from the apache logs?
14:45:01 *** gncbot has joined #gnucash
14:45:23 *** jralls sets mode: +o gncbot
14:47:30 <jralls> Looks like warlord killed httpd...
15:26:40 *** mlncn has joined #gnucash
16:14:15 *** Unhammer has quit IRC
16:24:03 *** Mechtilde has quit IRC
16:50:08 *** rubdos has quit IRC
17:08:57 <warlord> jralls: no, I didn't. It just didn't start on reboot... Starting it now.
17:09:30 <jralls> warlord: OK, but kill it if the load goes crazy.
17:12:14 <warlord> Will do. Right now load is 3.
17:12:31 <warlord> (the system HAD been up for like 117 days or something like that)
17:22:00 *** jesse has quit IRC
17:29:56 <jralls> OK, but it's not running Windows. How long has that VM been up, BTW?
17:32:12 <jralls> Web response on code is reasonable compared to the last few days.
17:58:51 *** husker has quit IRC
19:03:47 *** jesse has joined #gnucash
19:07:12 *** mlncn has quit IRC
19:09:31 *** Unhammer has joined #gnucash
19:27:49 <warlord> jralls: Yeah, load is under 1.
19:28:15 <warlord> Probably on the order of the same... It doesn't auto-reboot..
19:28:30 <warlord> So it could be as long as 142 days.
19:28:38 <warlord> You're welcome to reboot it..
19:29:36 <jralls> That's a really long time for Windows, especially XP. I will, after the build finishes. 40 hours so far, a new record!
19:31:08 <warlord> Lovely!!
19:31:31 <warlord> Go for it. Like I said, there's not any auto-reboot job in place
19:34:36 <jralls> Yup, 142 days, 5 hours, 52 minutes.
19:34:46 <jralls> According to systeminfo.
19:38:54 <jscinoz> Thanks jralls, I applied that patch on my machine and reports work once more :)
19:39:22 <jralls> jscinoz: Very good.
19:39:47 *** ecocode` has left #gnucash
20:02:07 *** minot has joined #gnucash
20:25:42 *** mlncn has joined #gnucash
20:33:20 *** mlncn has quit IRC
20:38:39 <warlord> not surprising..
20:42:51 *** jesse has quit IRC
21:13:11 *** mlncn has joined #gnucash
21:21:10 *** mlncn has quit IRC
22:15:37 *** mlncn has joined #gnucash
22:23:44 *** mlncn has quit IRC
22:46:16 *** jesse has joined #gnucash
23:39:09 *** jesse has quit IRC