In the process of building out my network intelligence system I need to have a central location to collect system and event logs on my network.  Since my ReadyNAS has Linux under the hood I figured what better place (since it has plenty of space to store LOTS of logs).  Here is what I did.

First, you need a ReadyNAS with OS6 on it.  In my case I have one of the older ReadyNAS Pro 6 boxes, which only officially supports the older 4.x OS.  But there is a very easy way to upgrade to OS6, and it has been very reliable for me.  The downside is that it requires wiping out all data on your NAS and reformatting (Backup, Backup, BACKUP!).  I believe it’s well worth the hassle of backing up and restoring data to get this upgrade.  It will void your warranty (or make it much more difficult to get through tech support), but Netgear has been reasonably responsive in adding fixes for the unsupported legacy hardware.  Once my NAS was converted, updates have been easy and automatic.  Anyways, here is the info I followed to convert:  ReadyNAS Forums

Now, to set up syslog (rsyslog) to receive incoming logs on your network, do the following:

  1. Log into your NAS and enable SSH
    1. Go to System -> Settings -> Service -> SSH
  2. Create a new folder to store/share your logs
    1. Go to Shares -> Choose a Volume (or create one)
    2. Create a new Folder (call it logs?) and set permissions as you like
  3. Create a new group
    1. Go to Accounts -> Groups -> New Group
    2. Create a new Group (call it logs?) and set permissions as you like
  4. Go back to your new “logs” share folder and set permissions such that the “logs” group has read/write perms
    (These are very liberal permissions and basic groups/users, you can go much more restrictive, which I would recommend once you’ve got the basics working)
  5. Now ssh to your ReadyNAS as root using the same password as your web based admin account
  6. Install rsyslog
    1. apt-get install rsyslog
  7. Configure rsyslog
    1. vim.tiny /etc/rsyslog.conf
      If you don’t know vim, go read up first; you need to know how to insert, delete, and save
    2. Change the following lines:

      Remove the # signs in front of these lines at the top:
      $ModLoad imudp
      $UDPServerRun 514
      $ModLoad imtcp
      $InputTCPServerRun 514

      Add the # sign to these lines:
      #*.*;auth,authpriv.none -/var/log/syslog
      #cron.* /var/log/cron.log
      #daemon.* -/var/log/daemon.log
      #kern.* -/var/log/kern.log
      #lpr.* -/var/log/lpr.log
      #mail.* -/var/log/mail.log
      #user.* -/var/log/user.log
      #mail.warn -/var/log/mail.warn
      #mail.err /var/log/mail.err
      #news.crit /var/log/news/news.crit
      #news.err /var/log/news/news.err
      #news.notice -/var/log/news/news.notice
      #*.=debug;\
      #            auth,authpriv.none;\
      #            news.none;mail.none -/var/log/debug
      #*.=info;*.=notice;*.=warn;\
      #             auth,authpriv.none;\
      #             cron,daemon.none;\
      #             mail,news.none -/var/log/messages

      And add these lines to the bottom:
      $template RemoteLog,"/data/logs/%$YEAR%/%$MONTH%/%fromhost-ip%/syslog.log"
      *.* ?RemoteLog

    3. Be sure to change the /data/logs part to match the volume and folder you created in step 2 above
  8. Now enable and restart rsyslog
    1. systemctl restart rsyslog.service
    2. systemctl enable rsyslog.service
  9. Check to make sure rsyslog started happily
    1. systemctl status rsyslog.service
    2. tailf /data/logs/2015/03/<sender-ip>/syslog.log
      (replace <sender-ip> with the directory rsyslog created for the sending host)
      You should see something like this: "rsyslogd: [origin software="rsyslogd" swVersion="5.8.11" x-pid="24127" x-info=""] start"
  10. Log out of SSH and disable it if you don’t need it anymore.
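With everything restarted, you can sanity-check the listener from any machine on the network that has Python, without touching a device’s syslog settings.  A minimal sketch (the NAS IP in the comment is a placeholder for your own):

```python
import socket

def send_test_syslog(host, port=514, tag="nas-test"):
    """Send one RFC 3164-style message over UDP and return the bytes sent."""
    # PRI <14> = facility user (1) * 8 + severity info (6)
    msg = "<14>{}: hello from the syslog test script".format(tag).encode("ascii")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        return s.sendto(msg, (host, port))

# send_test_syslog("192.168.1.50")  # substitute your NAS's IP
```

If it worked, a syslog.log should appear under the year/month/sender-IP directory the template creates.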

That should cover the basics.  By default the ReadyNAS itself will log from its own IP, and all other hosts on your network will log from their own IPs.  There is of course a lot more custom configuration you can do; this is just the basics.  You will also be able to view your logs from the shared volume you created.

I commented out a lot of lines above to avoid duplicate logging in the /var/log directory, as that volume is only about 4GB in size.  You can always re-enable them and change their paths if you choose.


I’ve been very busy updating my home network infrastructure lately.  I wanted to improve the zone separation, while at the same time providing a reasonably secure connection between my resources at home and my resources on the net.

Some of these changes include:

  • Replacing my SSG-140-SH Firewall with a new SRX220H2 w/POE Firewall.
  • Replacing my DELL 5448 Switch with a new Netgear GS724T Switch.
  • Removing an old 4 port POE switch.
  • Replacing my old VLAN setup (Main, Media, Utils) with my new VLAN setup (Main, Wireless, Media, Utils, LAB, VPN, Tunnel).
  • Upgrading my old Dell 860 (250GB RAID1 and 8GB RAM) co-located server with a new SuperMicro-based server that has 12TB of storage and 32GB of RAM.  This is split into virtualization images, so I’ll be able to work with Docker/CoreOS/KVM-based technologies in my personal cloud.  This is tied into my home network via an OpenSwan -> SRX IPSec tunnel.  Additionally, the SRX will be able to provide dynamic SSL VPN capability for when I’m on the road.

All of the above gets added to my existing 12TB NAS, multiple POE wireless access points, and virtualization server.

I have a few more tweaks left to handle multicasting and cross-LAN traffic on the network, finishing up my log aggregation and analysis tools, as well as CoreOS and Docker work for PaaS deployments.  This should provide some nice resources for my security research.

I wanted to find a way to easily charge a couple of AA and AAA batteries from a solar panel for camping, hiking, and geo-caching.  Thought it would be nice to charge via the sun vs carrying around extra batteries charged up from the grid.  Turns out it wasn’t as easy as I had hoped, and yes, the solution involves pulling out the soldering iron, see below.

Finding a solar cell was actually pretty easy, doing some looking around I found this Anker 14W Portable Panel on Amazon:

Anker 14W Solar Panel

Cheap at about $50 for a full 14W with two USB ports.  All I needed to do was find a USB-powered AA/AAA charger.

Yeah, sure, no problem…

So, after a LOT of searching turns out about the only good one I could find was the Guide 10 Plus charger by Goal Zero:

Goal Zero Guide 10 Plus Charger

One big drawback: it’s designed to work “best” with their own 7W solar panel, which costs more than the Anker for half the wattage.  They say it will charge in 3-6 hours using their special connector to their solar panel, or 6-10 hours from a USB port.  It seems they put a charging limiter on the USB input port (likely a lower allowable current) vs the special solar port.

So what to do?  Build my own special cable that lets USB power feed the solar port on the battery charger instead of its USB port.  Two things to worry about: supplying the proper voltage and current to the solar port, and having the right size adapter.  Taking some measurements, I found that the solar port seemed to be a pretty standard 2.5mm x 0.7mm DC jack (High Speed USB 2.0 to DC 2.5mm Power Cable for Mp3 Mp4).  To handle the power side, I noticed that the box and literature stated the solar port input specs as 6.5V at up to 1.1 to 1.3A (depending on which Goal Zero document you read).  The Anker’s USB ports supply 5V at up to 2A, so I just needed to convert that to the required solar port specs.  To accomplish this I did some searching and found this:

Pololu Adjustable Boost Regulator - Converter
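Before buying I wanted to be sure the numbers worked.  A boost converter roughly conserves power, so a quick back-of-the-envelope check (the 85% efficiency figure is my assumption for a small regulator like this, not a published spec):

```python
# Boost converter sanity check: P_out ~= P_in * efficiency
usb_v, usb_a = 5.0, 2.0      # what the Anker's USB port can supply
p_in = usb_v * usb_a         # 10 W available
efficiency = 0.85            # assumed, typical for small boost regulators
out_v = 6.5                  # what the Guide 10 solar port expects
max_out_a = p_in * efficiency / out_v
print(round(max_out_a, 2))   # ~1.31 A, right at the 1.1-1.3 A solar-port spec
```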

This boost regulator can take in the 5V 2A from USB, and using a small screwdriver I was able to adjust the trimmer potentiometer to a measured 6.5V ~1.1A output.  My cable looked like this after my soldering work:

Back of Converter Soldering Converter and USB Plug Front of Converter Soldering

With a little bit of electrical tape to cover up the sensitive parts I had this:

Finished Custom Cable

At this point there was only one thing left: cross my fingers, hook it up, and give it a shot.  (Oh, and I did run this by an Electrical Engineering friend of mine first to make sure my plans were sound, given how long it’s been since my college electrical engineering classes.  He approved and gave me an A- on the soldering job.)

And it worked!  Not only did it work, with the 14W panel and its regulated 5V 2A, I got faster, more consistent charging times than with the Goal Zero setup.  I know this because, shortly after buying the 14W panel and all my parts to build my own charger, an incredible deal came up to buy the Goal Zero 41022 Guide 10 Plus Solar Recharging Kit, which included the 7W panel and another USB/Panel AA/AAA battery charger, plus mine came with the portable Rock Out speakers.  It was a VERY good deal or I wouldn’t have done it.  But it made for some great testing and comparison.

So, a happy and successful hardware hack!  And now I have two very effective portable solar-powered battery charging systems: the Anker-based one for heavy lifting and fast, strong charging of USB devices and batteries; the Goal Zero for flexibility (USB, 12 Volt, and Solar Port) and lightness (but slower charging).

The final Results:

Anker Solar Panel, Custom USB Cable, Goal Zero Guide 10 Charger

So, recently I started looking to see if there was any nice hardware around to provide a solid enclosure for a FreeNAS-based homemade NAS storage system.  In looking into this, I ran across this page: FreeNAS RAID Overview.  What really caught my eye was the statement “CAUTION: RAID5 ‘died’ back in 2009” and a link to this article: Why RAID 5 stops working in 2009.  Worried that I had made a fatal error in my existing 12TB (6x2TB) RAID 5 setup, I read on and realized something wasn’t right.  And it got worse; a follow-up article in 2013, Has RAID5 stopped working, by the same author continued on in error.  “What’s the problem?” you might ask.  Well, it is a failure to understand fundamental math.

See, the author (and, to be fair, lots of people) makes a mistake when looking at the probability of separate events added together.  They make the assumption that if you have six separate events, each with a given probability of happening, and you put them all together, then as a whole you’ve increased your chance of that event happening.  That’s completely wrong.  Your overall probability is no greater than the individual probabilities.  Each individual event has no effect on the other events.  So since you have six 2TB disks, each with a max URE failure rate (probability of a failed read) of 1×10^14, you are still only looking at the failure of that 2TB disk, not of the 12TB of storage.  If you really want to try to account for combined events, you can take the chance of having two drives fail with a URE at the same time.  This is done by multiplying the events together.  So 1/(1×10^14) times 1/(1×10^14) equals 1/(1×10^28) probability of failure, that is a URE of 1×10^28!  All failure probabilities are completely independent.  And it gets better from there:

1.  With the probability and statistics error stated above, you are only looking at the chance of failure for each individual disk, not the whole storage array.  So you have a 1×10^14 probability of a read failure for a 2TB disk during the recovery of any disk.  Yes, this technically gets worse as drive sizes increase, but you would need to read each individual, COMPLETELY FULL 2TB disk, in whole 6.25 times (for the needed 12.5TB of data) to hit this probability of failure point on that disk.  For a 4TB disk you have to read the entire full disk 3.125 times, so worse odds, but in most setups this still is unlikely to occur during a rebuild (unless you’ve just got bad luck).

2. That 1×10^14 is the MAX unrecoverable read error rate.  That means you should get no more than that number of failures.  You are actually likely to get fewer, so you can expect to read more than 12.5TB of data before a failure.  See, more good news!

3.  When RAID 5 is in recovery mode, you are not reading a full 2 TB of data off your full 2TB disks to rebuild your failed drive.  The parity information to recover the drive is only the total usable storage divided by the number of drives in the array.  For a 2TB x 6 array (12TB of raw storage) you get 10TB of usable storage.  That 10TB is divided by 6 to give you about 1.67 TB of data needed to be read off each individual 2 TB drive to recover the failed drive in the array. So, again, your odds get better.

Yes, the chance of failure does go up as drives get larger (assuming URE doesn’t improve), and, yes, you should ALWAYS have offsite (a different raid box) backup for anything you don’t want to risk losing (good disaster recovery strategy anyways).  But RAID 5 isn’t dead and is still an excellent choice for good performance, reliability, and cost.

And here is my real-life example:  I made the mistake of purchasing Seagate “green” 2TB drives for my original 6x2TB NAS box.  These drives have a little bug: when used with some hardware RAID solutions, they report “failed” even when they haven’t really failed.  For 4 months after I installed these drives, I had a drive failure just about every three weeks and had to do a rebuild of 5TB of data (take the failed drive out, format it blank, stick it back in, rebuild).  That’s about five RAID 5 rebuilds before I finally gathered the funds to replace all the drives with WD Red NAS drives (no failures since).  Oh, and each time I swapped a green drive out for a Red drive, another RAID 5 rebuild, so six more rebuilds for a total of eleven.  Guess what, I got lucky and there were no URE events during any of those rebuilds and no data was lost (yes, I have off-site backup as well).  Of course when I say luck, I mean my odds were pretty good that I wouldn’t have a catastrophic failure, as opposed to what the other author claimed.  😉

So I finally took the time and got up and running on IPv6. I’ve had the addresses for a while, and getting Linux up and talking IPv6 is pretty straightforward. All you need is to add some lines like these to your ifcfg-ethX file:
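On a RHEL/CentOS-style system the additions look roughly like this (a sketch; the address and gateway are placeholders from the 2001:db8::/32 documentation range, not my real ones):

```
IPV6INIT=yes
IPV6ADDR=2001:db8::10/64
IPV6_DEFAULTGW=2001:db8::1
```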


And of course, can’t forget to set up ip6tables to match what iptables is blocking!
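On CentOS the rules live in /etc/sysconfig/ip6tables; a minimal starting point might look like the sketch below (mirror your own iptables policy; the open port here is just an example, and remember ICMPv6 must stay open for IPv6 to work at all):

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p ipv6-icmp -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
COMMIT
```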

Getting Apache up on it was a little more fun. I’ve got some virtual hosts spread about, so I basically had to find every reference to my sites’ IP addresses and duplicate all relevant configs, swapping the IPv4 addresses for bracketed IPv6 addresses (like [1922:1::1]). Examples would be:

Listen [1922:1::1:2]:80
NameVirtualHost [1922:1::1:2]:80
<VirtualHost [1922:1::1:2]:80>

What was the real bear was WordPress and plugins. See, once I had this all set up and running, Apache wanted to talk to the world via IPv6 (IPv4 is still there, just less favored)! Of course, WordPress’s and Akismet’s servers don’t do IPv6, and things broke. To fix a lot of this I had to enter /etc/hosts entries specifically for the WordPress and Akismet servers. Here are some examples of my entries:

UPDATE: The entries below are no longer needed and will now break things (though entries for the news feed can still be added).

With those in the hosts file, my system now defaults to IPv4 when those plugins try to do their behind-the-scenes checks. I also had to update the Dashboard news feed to the updated URL, which apparently changed since it was added to my WordPress install (they use a redirect on their server, which again fails with IPv6).
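A quick way to check whether a given hostname publishes any IPv6 (AAAA) records, which is exactly what those plugins’ servers were missing, is a couple of lines of Python (the wordpress.org call in the comment is just illustrative):

```python
import socket

def has_ipv6(host):
    """Return True if host resolves to at least one IPv6 address."""
    try:
        return len(socket.getaddrinfo(host, None, socket.AF_INET6)) > 0
    except socket.gaierror:
        return False

# has_ipv6("wordpress.org")  # back then this was the failing case
```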

After all that it’s now up and running. Next will be tackling postfix and email over IPv6, but that’s for another month…

For years now I’ve used telnet as a quick and easy way to check whether the most basic network functionality of a service like HTTP is working. I.e., I telnet to port 80 and see the raw server communication. Very helpful in debugging network services. Where it fails is when you get into SSL services. Telnet to port 443 and sure, you’ll see you connect, but you’re not going to be doing an SSL handshake.

So I finally did a little googling and ran across this gem:

openssl s_client -connect <host>:<port>

And now I have the SSL handshake plus the raw plaintext interface that telnet provided.

Works great for all my SSL service troubleshooting (imap/pop/https/etc.).

Found the info at this site:

Ok, this has been bothering me for a while.  I upgraded my desktop to CentOS 6 to have a nice stable platform going forward from my previous Fedora 14 install, and all was good.  Except Enigmail gpg passphrase caching broke.  Every time I hit an encrypted email I had to enter the passphrase at least twice it seemed, and pity me if I clicked on a threaded email conversation.

So after digging around I found the following fix:

Edit .bash_profile and add:

gpg-agent --daemon --enable-ssh-support --write-env-file "${HOME}/.gpg-agent-info"

if [ -f "${HOME}/.gpg-agent-info" ]; then
  . "${HOME}/.gpg-agent-info"
  export GPG_AGENT_INFO
  export SSH_AUTH_SOCK
fi
Edit .bashrc and add:

GPG_TTY=$(tty)
export GPG_TTY

And now all is happy.  Some of this was found on this page:

Some of it was trial and error, plus a healthy amount of googling.

So it’s been over two years since my last post.  I’ve been very busy in my life and haven’t had time to do as much tinkering and computer stuff at home as I usually would.  That’s not to say I haven’t done anything, just that I haven’t documented it.  Here are a few things that happened in the last two years:

  1. I changed jobs; I now work in computer, network, and systems security full time.  I’m loving it!  I’m finally getting to really practice what I preach in the security field.  Georgetown was fun and a great time to grow my general systems experience, but I’m enjoying the focus on computer and network security.

  2. Got a new car.  This actually happened about three years ago, but I never posted about it.  The Chevy Blazer was taken out by its imploding supercharger and deemed not worth my time, effort, and money to repair.  Given it was early 2009 and car dealers were giving away cars, I got a great deal on a new 2009 VW Tiguan SE with AWD.  I still love the car and am making small upgrades as the years go on to make it more mine.  I did actually stand up a page for that work here: My SUV Project (Tiguan).

  3. I made some network and computer upgrades at home as well.  I replaced my original first-generation MacBook Pro 15″ (Intel Core Duo 2GHz) with a late 2010 model MacBook Pro 15″ (Intel i7 Dual Core) with HD display and 8GB of RAM.  It’s currently triple-booting MacOS X 10.6, Fedora 16, and Windows 7 Enterprise.  I have a post on how to set up triple boot in the works.  I also upgraded my old Promise NS4300N 2TB NAS box to a new NetGear ReadyNAS Pro 6 12TB.  Much faster and a lot more storage, plus so many options.  I’ve also kept the network up with technology: full WiFi at 300mbps+ and GigE wired via a NetGear WNDR4000 and assorted GigE switches, paired with FiOS internet.  Finally, I upgraded my workstation piece by piece to get it up to a Sandy Bridge i7 and 16GB RAM so that I can build out a new HD+CableCard MythTV network using VMs, the NAS box, and the new Silicon Dust HD Prime.  I’ll have a post documenting my general network gear later, as well as posts on how I set up MythTV.

  4. I’ve got a Barnes and Noble Nook Color as well.  It’s a great little device, and I’m hoping to take better advantage of it this coming year.  And yes, it’s rooted.  Running stock Nook software, but with the added benefit of sideloaded and standard Android Market apps too.

  5. And last but not least, still being a dad and husband, working away and enjoying watching the kids learn and grow (as I learn and grow).


Well, after 2.5 years, I just turned in my application to graduate from my Master’s in Computer Science program at Georgetown University.

I started the program in the Fall of 2007 with my first class, Information Retrieval (basically search engine and data mining technologies).  Some of my favorite classes included Network Security, Information Warfare, Requirements Engineering, and Service Oriented Architecture.  I finished my studies with an independent study revolving around Privacy and Information Control in Fall 2009: basically a cross between Information Warfare, Information Retrieval, and their privacy implications, with a little Java programming thrown in.

All grades are in (I did very well, even with being a new dad to kids during this time period; thank you, wife!), so the rest is just formalities.

Now I can get back to more of my volunteer and independent work as well as hobbies.

I decided it was a little much having two “netbooks” around, so I sold my trusty Sharp MM20 (a netbook that came out before anyone had heard of netbooks) to another MM20 owner with all the accessories.

So I’ve dedicated myself to the Acer Aspire One, and it’s doing a great job.  One complaint was the horribly slow 16GB SSD drive it came with.  It’s pitifully slow, and loading a full-blown Linux distro on it started showing its shortcomings.  Well, this was solved by replacing the drive with a better-performing RunCore SSD.  Now the machine is quick and responsive.

I’ve loaded Fedora 12 on the machine with “Desktop Effects” enabled, SELinux enforcing, and an encrypted hard drive via dm-crypt.  In truth, I notice no performance loss; it’s quick and responsive with no stuttering.  Works great for web browsing, SSH sessions, and email.  That’s all I really need from a netbook.  Oh, and 5-hour battery life is no problem for this little 2.5lb machine.


Copyright © 2015 · All Rights Reserved · Cafaro's Ramblings