Sunday, December 16, 2012

Integrating the TMP102 sensor with the datalogger, and I2C bus lengths

The TMP102 is an I2C temperature sensor available from Sparkfun on a breakout board.  It's pretty cheap, under $10.  It will actually allow you to have multiple sensors on the same I2C bus just by jumpering an address pin.

I'm working on creating a software module to decode the output from the sensor.  So far it isn't straightforward, at least for an amateur.  If I understand correctly, the sensor reports the readings in a backwards fashion (the two bytes of the temperature register arrive swapped).  I'm not sure I've decoded the output correctly, but I'm close.  Once I get that ironed out, I'll post the code up.
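In the meantime, here's a minimal sketch of the decoding as I currently understand it, assuming the Python smbus module and the default 0x48 address (ADD0 jumpered to ground).  Treat it as a work in progress, not the finished module:

import smbus

bus = smbus.SMBus(0)   # rev 1 Pi boards put I2C on bus 0; rev 2 uses bus 1
TMP102_ADDR = 0x48     # default address with ADD0 jumpered to ground

def read_tmp102(addr=TMP102_ADDR):
    # read_word_data returns the two bytes of the temperature register
    # in little-endian order, so they arrive "backwards" and need swapping
    raw = bus.read_word_data(addr, 0x00)
    msb = raw & 0xFF
    lsb = (raw >> 8) & 0xFF
    # the 12-bit reading is left-justified across the two bytes
    value = ((msb << 8) | lsb) >> 4
    if value > 0x7FF:          # two's complement for readings below 0c
        value -= 4096
    return value * 0.0625      # one count is 0.0625 degrees C

print(read_tmp102())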

I started playing with I2C bus lengths.  I heard that the practical length was less than a meter.  I've got a 2-3 meter CAT-5 cable set up running to the TMP102.  It's dangling outside the window, inside a tupperware container, and appears to be working just fine.  I'm using Sparkfun RJ-45 jacks and breakout boards.  It adds a couple of bucks to the cost of the project, but it makes for great modularity.  It's also going to let me do some cable run tests.  Eventually I'll post those up as well.

I modified the jacks and breakout boards slightly.  In place of a simple header, I used Arduino 8-pin stackable headers.  That way, I can easily add a second sensor (or more) at the cable end, while still being able to easily breadboard it.  So far it's working great.

The goal with this setup is to have two sets of environmental sensors.  One on the Raspberry Pi itself, and the other just slightly remote from the Raspberry Pi.  This sensor set could be outside a window, inside a terrarium/fish tank, inside a science project (fermenter? soil analyzer? hot water heater?), all kinds of possibilities.

If I can make the software scalable, anyone could add all kinds of sensors to the system and have the readings recorded easily.  The only drawback would be writing new sensor drivers for each set of sensors.  That's where I'm having a fair amount of difficulty now.

It looks like I was wrong when I figured out the size of the data files.  Right now, the datafiles appear to be much smaller, despite my cramming more information into them (Temp1, Temp2, Pressure, Motion Sensor, Time, Log Level).  I think I want to change the logging system even further so that it creates a better record of values.  Right now I'm saving a string for each sensor cycle with just the sensor values.  The software is written so that it assumes a certain value is in a certain position in the string.  That's great, as long as no one else is adding sensors to the system and the values always show up in the same order.

I'm wondering if I can create a log with each sensor cycle recording a dictionary.  Then I could easily write code to examine which sensors were polled for each sensor cycle.  I could easily pull all the sensor values, or some of the sensor values.  I could have some sensors polled more often than others.
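A minimal sketch of what I have in mind, using the standard json and logging modules (the field names are just placeholders):

import json
import logging
import time

logging.basicConfig(filename='sensors.log', level=logging.INFO,
                    format='%(message)s')

def log_cycle(readings):
    # record the cycle as a dictionary keyed by sensor name, so later
    # code can pull any subset of values without caring about order
    record = {'time': time.time()}
    record.update(readings)
    logging.info(json.dumps(record))

log_cycle({'temp1': 21.4, 'temp2': 3.1, 'pressure': 101325})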

I need to grab a second Raspberry Pi, and start thinking about how to pull all the sensor nodes together into one easy-to-read display.


Saturday, December 15, 2012

A quiet thought over here in my corner of the internet

As healers we find ourselves plunged into the misery and sorrow of others.  We take great pride and pleasure in our work.  Not from the misery and sorrow, but from the alleviation of infirmity.  Not just in body, but most of all in spirit.  It is there that our own pain can find relief.

Monday, December 3, 2012

GPIO Pins on the Adafruit Pi Cobbler

I posted the code for the program I used over in the Adafruit Forums:  Raspberry Pi Home Datalogger

I'm cleaning the project up a little bit.  I'm trying to make the code "nicer" and easier to follow (I should comment my code.... one of these days).  I switched the scheduling code over to a system that checks the time every second, executing specified code at the top of the minute (polling the temperature and pressure sensors) and at the top of the hour.  It works, but my system utilization never drops below 60%, and is frequently at 97%.  I may have to go back to having the system "sleep" in between sensor checks.  I think that's an inelegant way to do it, but if I want these little nodes to be battery and/or solar powered, the CPU can't spin all day.
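If I do bring back the sleep, a minimal sketch like this keeps the once-a-second time check while letting the CPU idle between checks (poll_minute and poll_hour are hypothetical stand-ins for my sensor code):

import time

def poll_minute():
    pass    # hypothetical: read the temperature and pressure sensors

def poll_hour():
    pass    # hypothetical: hourly tasks

def run():
    while True:
        now = time.localtime()
        if now.tm_sec == 0:
            poll_minute()
            if now.tm_min == 0:
                poll_hour()
        # sleep until just after the next second boundary instead of
        # busy-waiting; utilization should drop to almost nothing
        time.sleep(1.0 - (time.time() % 1.0))

run()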

I went back to the RPIO program I'd been screwing around with.  I wanted to see which RPIO-reported pins correlated with which pins labeled on the Cobbler T-Plate.
The chart is repeated over on the Adafruit forums, in the comments section of the code.
Cobbler   RPIO
#4  ----   7
#17 ----  11
#18 ----  12
#21 ----  13 (V1 boards)
#22 ----  15
#23 ----  16
#24 ----  18
#25 ----  22
#27 ----  13 (V2 boards)

Monday, November 19, 2012

Raspberry Pi Home Datalogger

I've wanted to do something like this since I first started playing with microprocessors.  The idea of an inexpensive, distributed sensor network throughout the house is really cool.  In Hollywood we always see someone sitting at their computer console accessing their security system.  It's typically a wire-frame display with sensors all over the place.  They're getting all kinds of data, and even providing the occasional remote output (displays, lights or sirens, etc).

The $35 Raspberry Pi is almost the ideal basis for this kind of sensor network.  It runs a (relatively) well developed Linux distribution.  It has two USB ports, runs on 5v, sports an ethernet port, and will support almost any video display out there.  More importantly, it has several exposed general purpose I/O pins and supports I2C.

I'm using Lady Ada's Occidentalis distro.  So far it has been incredibly stable (though it swaps between video outputs if I cut power to the system).  I'm using this distro because there are examples using Python to access the various GPIO's and the I2C.

My previous post detailed some of my earlier experiments with the system.  Since then, I have worked on building a series of programs to access a BMP085 board attached to the I2C bus.  I haven't done much with Python in quite awhile, so I'm fairly rusty and lacking a definite style.

I finished a series of programs last night: one that checks the sensor, logs the sensor data to a logfile, and updates a webpage, plus a simple program to automate the whole process.  Currently I've got the sensor being checked and logged every minute.  The logger program is using the Python logging module.  I'm recording the event level (I'm generating a warning if the temperature reaches 30c), the epoch time (to facilitate graphing later), and the temperature and pressure (I don't care about the altitude).  Using epoch time instead of Python's asctime also saves some space in each entry.  The log files are between 1.8 and 18 megabytes per day (one measurement per minute).  This yields recordings between 50-500 days per gigabyte.  I'm not sure why there is such an insane difference between "Total Size of Files" and "Size on Disk".  Given that it's close to an order of magnitude, I need to address that at some point.
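For reference, the core of the logging loop boils down to something like this minimal sketch.  The Adafruit_BMP085 import and its method names are assumptions based on the Adafruit example code, so adjust them to match whatever your library actually exposes:

import logging
import time
from Adafruit_BMP085 import BMP085

logging.basicConfig(filename='bmp085.log', level=logging.INFO,
                    format='%(levelname)s %(message)s')
bmp = BMP085(0x77)                     # default I2C address for the BMP085

while True:
    temp = bmp.readTemperature()       # degrees C
    pressure = bmp.readPressure()      # Pascals
    # epoch time is compact and easy to graph later
    line = '%d %.1f %d' % (time.time(), temp, pressure)
    if temp >= 30:
        logging.warning(line)          # flag anything at or above 30c
    else:
        logging.info(line)
    time.sleep(60)                     # one measurement per minute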

I'm looking to increase the number of sensors on the board.  I'm playing around with a GY-80 board that I got off of eBay some time ago.  It includes the BMP085 sensor, plus an IMU/gyro component with a compass.  It was about $20.  I'm wondering if the resolution is tight enough to work as a seismograph, though I'll have to increase the polling rate to something much higher than once a minute.  Luckily, with the processing power of the Raspberry Pi, I could poll the sensor very quickly, analyze the data locally, and only report data that was relevant.  This idea is going to be an add-on once some of the rest of the to-do list is finished.
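The "poll fast, report only what matters" part might look something like this minimal sketch, where read_accel is a hypothetical stand-in for the real GY-80 driver:

import time
from collections import deque

samples = deque(maxlen=100)    # rolling window of recent readings
THRESHOLD = 0.05               # deviation worth reporting (made-up units)

def read_accel():
    return 0.0                 # hypothetical: replace with the real driver

while True:
    z = read_accel()
    if samples:
        baseline = sum(samples) / len(samples)
        if abs(z - baseline) > THRESHOLD:
            print(time.time(), z)      # report only the relevant events
    samples.append(z)
    time.sleep(0.01)                   # poll at roughly 100 Hz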

A big thank you to the writer of the Giuseppe Urso Blog.  He's found the datasheets for all of the components on this board (I think).  The eBay seller has not been able to provide good information for me.

Things to do:
I want to improve the webpage.  Right now it's a simple text page served by Apache that shows the temperature and pressure, along with the system time of the latest measurement (actual time, not epoch time).

I want more sensors.  I'd like a light and motion sensor (again, we're trying to get close to that Hollywood sensor system).  It would be great to incorporate a smoke/carbon monoxide/gas sensor into this build.  A networked, web-enabled hazard detector would be incredible.  I've already got a couple of sensors for this.  To get them to work, I need to grab something that will let me add analog-to-digital feeds to the board (perhaps this?).

I really think I'd like to incorporate a second temperature sensor.  I'd like my first sensor node to be located just inside the front door.  Running a cable outside to a sensor (MPL115A2) would be pretty easy, and would give me the ability to log the weather outside compared to the environment inside.  The capability would also be nice if Angel decides to start working with herps again.  This would allow us to monitor the inside and outside of the tank.

The webpage needs serious work.  I'd like to be able to generate graphs of the data on the fly (click on the parameter, and it brings up a graph of the last hour; click on that graph, it shows the last 12 hours; click on that, it shows the last day, then week, month, year, etc.).  My skills with Python are not up to the task yet.  Hell, my skills with HTML are essentially non-existent (my webpages in the past have all used Dreamweaver).  I need to see if I can merge the two, and somehow get the Python code to play nicely with a Dreamweaver-generated page.

Automated alerts.  We'll go back to the Hollywood example for this one:  When the bad guys are breaking in, I need the network to tell my cellphone (or bring up an alert on my computer).  A more practical concept:  When the house gets too hot, I need to be told.  When the house is on fire, I need to be screamed at.

I want to keep the cost down.  The average cost, per node, should be less than $100.  At this price point, folks can afford to pick up a couple of nodes to watch their doors.  As they like it more, they could grab additional nodes for other rooms in the house (I want to be able to control my small home theater system using my cellphone and an IR equipped node).

Webcam support.  The Pi has two USB ports, and once the node is developed, there's no reason for either of them to be in use.  A webcam is a cheap way (I saw a couple for under $10 the other day) to gain tremendous amounts of information about a locale.  Even if the camera is only used to snap stills to send to a remote site (or just log locally), the idea has tremendous value.  As it stands, I haven't even tried hooking up a camera to the USB ports.

Free and open code.  An idea like this is only as good as the guy building it, and I'm not an engineer or computer scientist.  If I could get a couple of those guys interested in the project, they'd be able to do incredible things with it.  More to the point, they'd see ideas that I'd never even consider.  This project, as an open source project, has real potential.  I'm going to work on setting up a GitHub account and making my code public (I'll warn you, it's ugly code).

I'll post up more as I get it.

Late edit:  I'm starting to look at Lady Ada's Raspberry Pi WebIDE.  I spent the afternoon/evening learning BitBucket, and now I've got a BitBucket account with the code on it (PiLog).  I may not be doing things correctly, so the repository may be a little bizarre over the next couple of days.  The code that was loaded tonight was tested to work, though your mileage may vary.

Monday, November 12, 2012

Ultrasound Phantoms: War were declared

The ultrasound phantoms work.  They have a few limitations, but for certain applications, these cheap DIY phantoms are going to be far superior to expensive commercial phantoms.  In fact, part of their DIY nature makes them inherently superior.  Creating a phantom for a special purpose, such as vessel cannulation, foreign body identification, or needle procedures, is simple.

One of the big issues that has come up is bacterial/fungal growth on the phantom.  This should be expected, as the phantom is constructed of gelatin and sugar-free psyllium.  The system is essentially a giant petri dish.  I needed to find a way to inhibit microbial growth.  Not just inhibit, but stop.  Completely and totally.

I lucked out in a couple of ways.  I tried mixing some bleach into the material when it was on the stovetop.  The mixture foamed up a fair amount.  This wasn't a big issue with the first two layers, as pouring the next layer seemed to dissolve the previously settled foam.  The final layer settled with a very large amount of foam.  This foam is a random mixture of air and gelatin, and almost completely sono-opaque.

This foam formed the external surface of the simulant, and it represented a major problem for the phantom.  It was impossible to image past this interfering layer.  I tried scraping the surface foam off, but to no avail.
I spent three shifts at the hospital trying to find a way to make the system workable.  At this point I had intentionally inoculated the surface with dust and dirt to try to encourage microbial growth.  At the three-day point there was no growth, but also no apparent solution.

Ending my shift one morning, I had my "Eureka" moment.  One of the workmen was removing floor tiles by heating them with a propane torch.  Between my broken Spanish and his broken English, I was able to convey that I wanted to borrow his propane torch.  I quickly heated the surface of the phantom, melting the top foam layer.  The foam liquefied, which appeared to completely remove the interfering layer.

I waited ten anxious minutes, visibly bouncing.  I placed the ultrasound transducer and was rewarded with two easily visible vessel simulants.  Cannulation was almost exactly the same (the new material seems somewhat more friable).

Six weeks later, the phantom was still visibly free of microbial growth (I don't have the patience or material to actually test for microbial growth beyond naked eye visualization).

I'm stuck at the next step.  I'm either casting a human arm and making a mould, or I'm developing some method to snake the vessels through the simulant to increase the difficulty in cannulation.

Tuesday, November 6, 2012

Overdue Updates

For my one reader (Thanks Mom!) I know the updates for the Ultrasound Phantoms are overdue. There is progress. Unfortunately, you don't get to read about it today.

A couple of months ago I read about this cool new computer, the Raspberry Pi. As far as I know, they've only shipped the Model B board, which is $35. That money gets you a Linux computer that is slightly bigger than a credit card. You need to supply a keyboard and an SD card, and that's really it. If you want to run a Windows-like GUI, you'll want to have a mouse as well. It's got a couple of dozen general-purpose I/O pins to let you build in your own gadgets and stuff (read a pushbutton, blink an LED, etc).

The board supports at least half a dozen types of Linux. I went with the Adafruit Occidentalis build. It's built off of Debian Linux with some added extras: libraries for SPI, I2C, One Wire, and WiFi are built right into the distro. Couple this with the already-included Python (version 2.7 and version 3.2) and you have a junior hacker's dream.

 I'll reiterate: $35. I had everything else lying around the house (SD card, spare keyboard and spare mouse). I have a computer that will hook up to pretty much any video display I'll run into. It supports HDMI 1080p right out of the box. It plays nice audio. It surfs the web (slowly, sometimes). For less than the price of a copy of Windows, a new video game, or a tank of gas (holy crap! Yeah, and I drive a Prius).

 It's Linux. For some people that will be a deal breaker (Again, less than the cost of...). I'm having to relearn a fair amount. I used Linux a couple of times, years ago. I had a Linux box once, to toy around with. I had a dual boot netbook where I utilized Ubuntu to run some network diagnostics.

This time around, I'm really focusing on the command line interactions. The little gizmo is decent with the GUI but it does slow down a fair amount at times. However, this thing screams from the command line.

 I'm re-discovering Python as well. I learned 2.7 during college (Thank you, Dan Fleck. You got me excited about programming, and changed my life). It looks like there is a serious attempt to use Python 3 in the Raspberry Pi world.

 I wrote a quick utility to poll the Adafruit Pi Cobbler pin-breakout board. Eventually I want to get the pin states over a network.
import RPi.GPIO as io

io.setmode(io.BOARD)        # number pins by their position on the header

for pin in range(1, 27):    # the 26-pin GPIO header
    try:
        # configure as an input with the internal pull-up enabled
        io.setup(pin, io.IN, pull_up_down=io.PUD_UP)
    except Exception:
        continue            # skip power, ground, and otherwise reserved pins
    try:
        print(io.input(pin), pin)
    except Exception:
        print('invalid pin', pin)

print('Complete')

I'm running the program over SSH, so I guess, technically, I'm checking the pin state over the network.

It's important to note:
First, you have to run the program with superuser permissions (sudo).
Second, the program numbers the pins differently than the pins are numbered on the Adafruit breakout.
Third, you probably need to be REALLY careful with this code.  It attempts to mess with all of the pins, some of which aren't GPIO pins.  So far, it doesn't seem to have bothered anything.

I'm waiting for some more parts so that I can start messing around with I2C sensors. Eventually I want to build an environmental monitoring system.

Thursday, August 2, 2012

DIY Ultrasound Phantoms: Recipes, Part 1

I'm now on the fourth generation homemade ultrasound phantom. The recipe is pretty simple and low cost. I'm using store-bought Knox gelatin, generic sugar-free psyllium fiber (Metamucil), and vein simulators (5mm Penrose drains). Before I link to the articles I used, I'm going to shoot the authors a quick email (the idea of using the recipe below is not my original idea). Right now I'm building the phantoms in cheap, flat food containers (ultra cheap tupperware clones). Each phantom is currently constructed of three "pours". Each pour uses the following recipe:

250-300cc's of boiling water
3 packets of Knox gelatin
1-2 tablespoons of sugar-free psyllium fiber

With the water at a boil, add the packets of Knox gelatin slowly. Some will probably clump up (this isn't a big deal; I'll go into that in a bit). While adding the gelatin, watch the boil, as it may boil over. Stir continuously. Once the gelatin is added, slowly add the psyllium fiber, again watching carefully and stirring continuously.

If the mixture clumps a little bit, don't worry too much. If you are looking to create a relatively clear ultrasound phantom, pour the mixture into the mold (in my case, the cheap food storage container) through a strainer. This will catch most of the lumps. If you are looking to create a "dirty" looking phantom, leave the clumps in. These clumps will sonographically appear as areas of varying tissue density. These inclusions mimic the appearance of real tissue. Our goal (at my hospital) is to create a better arm simulation for US-guided IV insertion techniques. These inclusion bodies mimic scar tissue and varying tissue densities/types, and present a sonographic picture that is not as "cut and dried" as other vascular simulators.

The cheap food containers that I am using hold 3-4 gelatin "pours". I have been using three pours for my prototypes. After each pour, I'm allowing the gelatin to firm up in the fridge for two hours before pouring the next layer. After the first pour has firmed up, but before pouring the second, I'm putting down two Penrose drains, filled with water/saline, to simulate vessels. One of the vessels I'm overfilling/pressurizing with 30% more fluid, to simulate an artery. My vessel simulators occasionally leak, so I'm not having tremendous success with the arterial simulation. Still, with the slight pressurization, the vessel does have a slightly different appearance when visualized sonographically. This is a practice that, once perfected, will enable students to differentiate between arteries and veins. I'm working on a method to create an automated pulse, so that the vessel is maintained under pressure and appears to pulsate.

Thursday, July 26, 2012

DIY Ultrasound Phantoms

I've got a lot of projects kicking around. This one kinda got pushed to the front because one of my colleagues said "We need to be able to perform this skill, but we don't have the resources to develop our techniques". Ordinarily, this would warrant some kind of project, slow and methodical like. However, I'm now one of the guys that's responsible for fixing problems like this. It's not just an issue, it's "my" issue. The glove was thrown down, so to speak.

The issue is that we need to place IV (intravenous) lines in our patients. It seems pretty simple: you come to the ER, sick/broken/in pain, and we need to take care of you. One of the ways that we address this is by starting an IV line and drawing blood. If you are going to do either of those things, you can do them both (in theory). Most of the time, this is a simple matter of getting a skilled provider, with the right equipment, to the bedside. Veins are typically pretty easy to find (it helps if you have sensitive fingertips, not sensitive eyes), and then it's just a matter of a quick poke.

The complicating factor is the condition of the patient. Some folks don't have great veins (for that matter, some folks don't have shallow veins, demonstrated sonographically). If you can't find a shallow vein, you need to find a deep one. Beyond a few millimeters, you simply can't feel a deep vein. You need some way to visualize those deep veins.

Enter the bedside ultrasound system. We use these during traumas to examine patients for internal bleeding. Occasionally we use it to check on pregnant women and the health of the fetus. Other than that (this comprises maybe an hour or two a day, at most) our bedside sono units are dormant. They're like any other medical imager: not cheap.

A couple of docs figured out that we can use these things to visualize deeper veins (and arteries). From there it was a simple trick to guide an angiocath (an IV needle) into one of these veins. It's a pretty cool solution. It's not as invasive as a PICC line (Peripherally Inserted Central Catheter) or other central line (subclavian, femoral, etc.), and it's more comfortable than a jugular line. It's also a guided solution: no blind sticking, and you can see the structures in the patient's arm to avoid complications (inadvertent arterial puncture).

However, it's also a bigger deal than a simple IV stick. You're taking an angiocath that you would normally poke a couple of millimeters down, and going 10-20 millimeters down into someone's arm. You're asking more of your patient. More to the point, if all you have done, to date, is a standard IV placement, you're asking a lot of yourself.

I know that the folks that I work with care for their patients. Or they're crazy. No one would work under these conditions (low pay, constant threat of bodily harm, verbally abusive clients, incredible work loads) for years. They all seem highly functional, so I'm going to stick with "compassionate". I think my colleagues generally care for their patients, like they would care for their own family. So it's a lot to ask them to jam a needle two to three inches into someone's arm, using a technique they're uncertain of. In fact, it's just wrong.

We have ultrasound simulators, called phantoms, that we borrow from the medical school. They're around $300-$400, and they're simple blocks of gel (silicone? dunno) with vessels suspended within them. The sonographic image looks pretty nice: a couple of straight vessels a couple of centimeters deep.

It's not what a patient's arm looks like: vessels that are all over the place, twisting and turning, with various inclusions scattered through the arm. We needed to come up with a way to transition from the perfection of the standard ultrasound phantom to the reality of patient care. Oh yeah, one more thing: no budget.

Solutions will start in the next post. (I'm going to try out a cliffhanger ending for this one)