My home automation system

Last updated 7/24/2011

 

(This page is just kind of a starter.  Lots more details need to get filled in.  A better description is being started on my Project Notes pages here.)

 

The system that collects the data displayed on the front page of this web site has been under development since late 2008 (with no end in sight :-)  Its main purpose is to let me see how the house is doing when I'm not there – especially when we're on vacation.

 

Collection/control network

There are currently about six data collection/control nodes on an RS-485 network running all around the house (and out to the garage) at 57 kbps over Cat5 cable.  Three of the cable's four pairs carry unregulated 12 VDC to power the nodes; the fourth pair is the (half-duplex) data.  Everything is just wired in parallel.

 

The nodes run PIC 16F628s.  Some talk I2C to LM75 temperature sensors; some just sense data bits, as with the rain sensor and the sump cycle sensor.  Some have output bits that can control stuff.  Ones with wires that could get ground referenced (like water sensors on the basement floor) have opto-isolators between the external sensor and the PIC.  The one that reads the water meter is just a dumb bit input, but there's some significant hardware optically sensing a spinner on the meter and turning it into a clean logic level.

 

The master/slave protocol on the wire is locally designed, since it is not expected to talk to anything else.  (Sounds like famous last words.)  It's basically a sync byte, from addr, to addr, opcode, length, data, checksum.  The master polls each of the nodes (currently once a minute) and the addressed node replies with its latest info.  Data – both length and meaning – is device dependent, so the master program has a decode callback for each device, identified uniquely by its address.  The protocol also allows the master to send data to the slave nodes so it can use them to control external devices.
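
Just to make the framing concrete, here's roughly what it could look like in perl.  The sync value, opcode, node addresses, and the simple additive checksum below are stand-ins for illustration, not the real constants:

    #!/usr/bin/perl
    # Sketch of the 485 frame format described above.  The sync value,
    # opcode, node addresses, and additive checksum are illustrative only.
    use strict;
    use warnings;

    my $SYNC    = 0xAA;     # stand-in sync byte
    my $OP_POLL = 0x01;     # stand-in "send me your latest data" opcode

    # Build a frame: sync, from, to, opcode, length, data bytes, checksum.
    sub build_frame {
        my ($from, $to, $opcode, $data) = @_;
        my $body = pack('C5', $SYNC, $from, $to, $opcode, length $data) . $data;
        my $sum  = 0;
        $sum = ($sum + $_) & 0xFF for unpack('C*', $body);
        return $body . chr($sum);
    }

    # Per-device decode callbacks, keyed by node address (addresses made up).
    my %decode = (
        0x10 => sub { printf "temp bytes: %s\n", join ',', unpack 'C*', shift },
        0x20 => sub { my ($b) = unpack('C', shift); printf "rain bit: %d\n", $b & 1 },
    );

    # Once a minute the master sends build_frame($master, $node, $OP_POLL, '')
    # to each node and hands the reply's data field to $decode{$node}->($data).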

 

Currently two nodes use that control capability:  the router/modem power cycler and the sprinkler controller.  The former gives the computer a chance to fix a lost internet connection by power cycling the DSL modem and main router, each of which occasionally gets confused.  The sprinkler controller lets the main computer run 8 channels of 24V valves that water various parts of the landscaping.

 

Main collection program

Finally moved to a 5 watt pink Pogoplug from its earlier home on the main PC, the main control program is written in perl.  It's responsible for polling the slaves, deciding when to give up on them, collecting info for 5 minutes, and then trying to send the latest to the web site.  It can send text messages to my phone for serious problems like sensor failures, extended power outages, water where it shouldn't be, etc.  Some details of the migration to the Pogo are here.

 

Control of the sprinkler system is driven by a perl module that reads a crontab-style control file.  Real Soon Now (this coming watering season, I hope!) the program's knowledge of rain and temperature and a new soil moisture sensor will let it modulate the watering intelligently.
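
For the curious, a crontab-style schedule reader could be as simple as something like this.  The file format (minute, hour, days of week, zone, duration) and the zone names are just my illustration, not the real control file:

    #!/usr/bin/perl
    # Sketch of a crontab-style sprinkler schedule reader.  The file format
    # (minute hour days-of-week zone duration-in-minutes) is just my
    # illustration, not the real control file.
    use strict;
    use warnings;

    # Example lines in the (hypothetical) control file:
    #   0  5  1,3,5  front_lawn  20
    #   30 5  *      garden      10
    sub due_now {
        my ($file) = @_;
        my ($min, $hour, $wday) = (localtime)[1, 2, 6];
        my @due;
        open my $fh, '<', $file or die "can't read $file: $!";
        while (<$fh>) {
            next if /^\s*(#|$)/;                       # skip comments and blanks
            my ($m, $h, $days, $zone, $dur) = split ' ';
            next unless $m == $min && $h == $hour;
            next unless $days eq '*' || grep { $_ == $wday } split /,/, $days;
            push @due, [$zone, $dur];                  # zone to open, for how long
        }
        close $fh;
        return @due;
    }

    # The main program would call due_now() once a minute and tell the
    # sprinkler node which valve channel to turn on.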

 

There is a little web server on the Pogo, and it would be nice if I could serve the main stats page from there as a backup to the web host.  No progress on that yet.  The web server (and sshd!) are visible on non-standard ports from the Internet.  I'm using a dynamic DNS mechanism for that.

 

UPS

The whole system – and especially its ability to sense and report power failures – obviously doesn't work well if the computer or the network/internet connection goes down when the power goes out.  The UPS is a big, old, ugly sine wave UPS/power conditioner retired from work.  The two 12V gel cells that used to live inside it have been replaced with two large marine deep-cycle batteries sitting next to the UPS.  It's ugly, but it keeps the computer, router, and modem up for maybe 3 hours of power outage.

 

Unfortunately, the UPS doesn't have much intelligence.  It has a contact closure when it shuts down, but that's not a useful warning.  On the list is a battery voltage monitor that should let me shut the main PC (the biggest power hog) down gracefully while there's still lots of battery left to keep the Pogo, modem, and router up.  That monitor – which needs to be opto-isolated from the network, since there's a voltage relationship between the battery terminals and the output AC – will talk to the PC over the 485 network.

 

Interestingly, the batteries are protected from lack of maintenance by – my backup sump pump!  That has a sensor and an alarm for "needs water" which beeps every couple of months.  As long as I have water and tools out to fill that one, I always fill the UPS batteries as well.

 

File transfer

The update data sent to the web site consists of one ASCII line per sensor per sample.  The main perl program attempts an ftp of the latest updates every 5 minutes.  If it fails, the updates are remembered (in a file) and all of them will be sent when it next succeeds in connecting.
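
The send-or-queue logic is basically the shape below.  Host, login, and file names are obviously placeholders, and the real program is more careful than this:

    #!/usr/bin/perl
    # Sketch of the send-or-queue cycle.  Host, login, and file names are
    # placeholders; the real program is more careful than this.
    use strict;
    use warnings;
    use Net::FTP;

    my $QUEUE = '/var/spool/house/pending.txt';   # hypothetical local queue file

    sub try_send {
        my $ftp = Net::FTP->new('ftp.example.com', Timeout => 30) or return 0;
        $ftp->login('user', 'password') or return 0;
        $ftp->cwd('/data')              or return 0;
        # Append everything queued so far to the ever-growing file on the host.
        $ftp->append($QUEUE, 'sensors.txt') or return 0;
        $ftp->quit;
        return 1;
    }

    # Every 5 minutes: if the send works, empty the local queue; if not, the
    # lines just stay there and go out on the next successful connection.
    # (A real version has to watch for lines added while the send is in flight.)
    if (-s $QUEUE && try_send()) {
        open my $fh, '>', $QUEUE or die $!;
        close $fh;
    }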

 

With both my previous and my new hosting provider, there are occasional ftp failures which truncate the ever-growing file on the web host, losing all previous info.  Since the web software expects a startup header in that file, the web site can't show data any more after that happens.  I should make some kind of transaction-based update mechanism.  Currently I have to intervene manually to push some data from the host machine here up to the web host.
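
One transaction-ish fix, continuing the Net::FTP sketch above, would be to upload the whole file under a temporary name and only rename it over the live one after the transfer completes, so a failed ftp can't leave a truncated file behind.  File names here are placeholders:

    # Sketch of an upload-then-rename replacement; $ftp is a connected
    # Net::FTP handle like the one in the sketch above.
    sub safe_replace {
        my ($ftp, $local, $remote) = @_;
        $ftp->put($local, "$remote.new") or return 0;     # complete upload first
        $ftp->delete($remote);                            # ok if it isn't there yet
        $ftp->rename("$remote.new", $remote) or return 0;
        return 1;
    }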

 

Alerts

The main perl program can currently send me an SMS message by sending an email to my phone's text address.  It alerts me if a sensor fails to respond 5 times in a row or if there's a power outage.  When the water sensors are in place, finding undesired water will certainly trigger an alert, too.  Note that all of this depends on my internet link being up!  I've vaguely considered a dial-out backup, but it isn't likely.
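
The alerting amounts to a consecutive-miss counter plus an email to the phone's SMS gateway.  A rough sketch, where the gateway address and the use of the local sendmail are assumptions:

    #!/usr/bin/perl
    # Sketch of the alert path: count consecutive failed polls per node and
    # email the phone's SMS gateway after 5 in a row.  The gateway address
    # and the use of the local sendmail are assumptions.
    use strict;
    use warnings;

    my $SMS_ADDR = '5551234567@txt.example.com';   # placeholder carrier gateway
    my %misses;                                    # consecutive failures per node

    sub send_sms {
        my ($msg) = @_;
        open my $mail, '|-', '/usr/sbin/sendmail -t' or die "no sendmail: $!";
        print $mail "To: $SMS_ADDR\nSubject: house alert\n\n$msg\n";
        close $mail;
    }

    sub note_poll_result {
        my ($node, $ok) = @_;
        if ($ok) { $misses{$node} = 0; return }
        if (++$misses{$node} == 5) {               # alert once, on the 5th miss
            send_sms("node $node has not answered 5 polls in a row");
        }
    }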

 

Ping stats to ISP

While it's not part of the 485 network, a separate perl program pings the router at my ISP (the one my DSL modem connects to) every 5 seconds and keeps round-trip time stats.  This is mostly to detect whether the internet connection is OK.  It should really go to a few other stable sites outside my ISP to get a better view.  The stats from that bit of perl are also collected by the main program and sent to the web site so I can see how connectivity has been.  The collection program notices when it thinks connectivity is bad – currently 20 consecutive missed pings – and sets a flag that tells the main perl program to tell one of the 485 nodes to power cycle the modem and router.  There's some detail here.
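
In sketch form, that monitor is just a loop like the one below.  The router address, the flag file, and the use of Net::Ping are illustrative, and ICMP pings need root:

    #!/usr/bin/perl
    # Sketch of the ping monitor: one ping every 5 seconds, round trip times
    # remembered, a flag set after 20 consecutive misses.  The router address
    # and flag file are placeholders, and ICMP pings need root.
    use strict;
    use warnings;
    use Net::Ping;

    my $TARGET = '10.0.0.1';                 # placeholder for the ISP-side router
    my $FLAG   = '/tmp/net_is_down';         # hypothetical flag for the main program

    my $p = Net::Ping->new('icmp', 2);       # 2 second timeout
    $p->hires(1);                            # sub-second round trip times

    my ($missed, @rtt) = (0);
    while (1) {
        my ($ok, $sec) = $p->ping($TARGET);
        if ($ok) { push @rtt, $sec; $missed = 0 }
        else     { $missed++ }
        if ($missed == 20) {                 # connectivity looks dead
            open my $fh, '>', $FLAG and close $fh;
        }
        # (A real version would also write the rtt stats out periodically.)
        sleep 5;
    }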

 

Web site software

I found some php software that can produce graphs with auto scaling, etc.  Some more php reads the datafile and gets the data to feed the graphs.  There's still a bug in the water meter graph: it doesn't deal gracefully with restarts and sometimes shows huge negative water usage spikes.  I've gone after it twice and it still occurs.  The datafile analyzing software bucketizes the one-minute data into something that makes for a smoother graph, and I suspect there's an interaction between the sampling boundaries and the bug.  Someday…
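
The shape of the fix I have in mind: the meter reading is a running count, so any sample lower than the previous one should be treated as a restart rather than differenced into a huge negative number.  A sketch of that in perl (the real graphing code is php, and the datafile layout shown is just for illustration):

    #!/usr/bin/perl
    # Sketch of the restart guard: the meter value is a running count, so a
    # sample lower than the previous one is treated as a restart and adds
    # zero to its bucket instead of a big negative number.  The datafile
    # layout here (epoch seconds, meter count) is just for illustration.
    use strict;
    use warnings;

    my $prev;
    my %bucket;                              # usage summed per 15-minute bucket
    while (my $line = <DATA>) {
        my ($epoch, $count) = split ' ', $line;
        my $delta = defined $prev ? $count - $prev : 0;
        $delta = 0 if $delta < 0;            # restart: clamp instead of spiking
        $bucket{ int($epoch / 900) } += $delta;
        $prev = $count;
    }
    printf "bucket %d: %d\n", $_, $bucket{$_} for sort { $a <=> $b } keys %bucket;

    __DATA__
    1311500000 1234
    1311500060 1236
    1311500120 17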

 

There's a watchdog that alerts me if the web host doesn't get an update for a while.  The web host doesn't really allow long-running processes (it kills anything running longer than some threshold).  The watchdog gets around that by forking a new instance of itself (with a new pid!) every 8 minutes or so.  It emails me when the file hasn't been updated for a while, and it runs successfully for months on end.
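
The fork trick looks more or less like this.  The file name, address, and thresholds are placeholders:

    #!/usr/bin/perl
    # Sketch of the watchdog's fork trick: check the datafile's age, complain
    # by email if it's stale, and after ~8 minutes start a fresh copy of this
    # script (new pid) and exit, so the host's process-lifetime limit never
    # triggers.  File name, address, and thresholds are placeholders.
    use strict;
    use warnings;

    my $FILE  = 'sensors.txt';
    my $STALE = 20 * 60;                     # complain after 20 minutes

    my $complained = 0;
    for (1 .. 16) {                          # about 8 minutes of 30-second checks
        my $mtime = (stat $FILE)[9] || 0;    # 0 if the file is missing
        if (!$complained && time - $mtime > $STALE) {
            open my $m, '|-', '/usr/sbin/sendmail -t' or die $!;
            print $m "To: me\@example.com\nSubject: stale datafile\n\n",
                     "$FILE hasn't been updated in a while\n";
            close $m;
            $complained = 1;                 # at most one email per 8-minute life
        }
        sleep 30;
    }

    # Hand off to a brand-new process and let this one end.
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    exec $^X, $0 if $pid == 0;               # child re-runs this script, new pid
    exit 0;                                  # parent goes away quietly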

 

I just recently succeeded in using wget on the web host to get a nice ASCII table of rainfall as recorded by the USGS at Salt Creek here in Elmhurst.  That data is taken every 5 minutes – just like mine.  I'd very much like to integrate it with the current rainfall info from my hardware as another line on the rainfall graph.  That would provide both a sanity check and a good second source of data for when my hardware clogs up.  Someday…
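
The fetch-and-parse side would look something like this.  The URL is a placeholder and the column layout is my guess at the table wget returns:

    #!/usr/bin/perl
    # Sketch of pulling the USGS table in and reshaping it to one line per
    # sample.  The URL is a placeholder and the tab-separated column layout
    # (with '#' comment lines) is my guess at what wget returns.
    use strict;
    use warnings;

    my $url   = 'http://waterdata.usgs.gov/...';   # placeholder for the real query
    my @lines = `wget -q -O - '$url'`;
    die "wget failed\n" if $?;

    for my $line (@lines) {
        next if $line =~ /^#/;                     # skip comment/header noise
        chomp $line;
        my (undef, undef, $timestamp, undef, $rain) = split /\t/, $line;
        next unless defined $rain && $rain =~ /^[\d.]+$/;
        print "$timestamp USGS_rain $rain\n";      # same one-line-per-sample shape
    }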

 

I would really like to change the back end to maybe a real database and do periodic compression/purging of old data.  (I'll keep my one-minute data for a while, but maybe reprocess it to hourly (average + peak?) after 2 weeks or a month, and maybe to daily after 6 months.  I still have the original one-minute data on the PC at home.)  My new hosting provider (GoDaddy) supports mysql, so that makes it a little easier.  And maybe I can even get around the ftp failures/file truncations with transactions to the db!  The Round Robin Database tool looks pretty interesting, but I don't know if it will run on either the web host or the Pogo.
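
If I do go the mysql route, the transaction idea is roughly this.  Database, table, and credentials are placeholders, and the table would need to be InnoDB for transactions to actually matter:

    #!/usr/bin/perl
    # Sketch of the "transactions to the db" idea with DBI.  Database, table,
    # and credentials are placeholders.
    use strict;
    use warnings;
    use DBI;

    my @pending = ([time, 'furnace_temp', 68.5]);  # stand-in for queued updates

    my $dbh = DBI->connect('dbi:mysql:house', 'user', 'password',
                           { RaiseError => 1, AutoCommit => 0 });

    my $ins = $dbh->prepare(
        'INSERT INTO samples (taken_at, sensor, value) VALUES (?, ?, ?)');

    eval {
        $ins->execute(@$_) for @pending;
        $dbh->commit;                 # the whole batch lands, or none of it does
    };
    if ($@) {
        $dbh->rollback;               # a failed push can't truncate anything
    }
    $dbh->disconnect;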


…end…