Wednesday, December 16, 2009

Hawaii, as you like it.

Checking the solar weather, you will see there has been some real sunspot activity lately. Is that a good enough reason to check out the HF bands? You bet!

There was exactly one SSB signal on 15 meters, and it was Jerry, AH6V on the Big Island of Hawaii. He was a good S7 here in CT, a very steady and clear signal. Turns out he was running 300 W using exclusively solar power! That's remarkable -- an inspiration, even.

We had a pleasant QSO. I was able to tell him about my new solar garden light system that struggles to produce a couple of mA for a few hours after sunset in the New England winter...

Where were all the rest of the Hawaiian hams?

Sunday, December 06, 2009

Yet another clock program

So I was grumping to myself that my nice, big, cheap Timex digital clock wasn't doing the best job for the ham station. I had modified it for 24-hour time, but the tens-of-hours digit could only produce a squiggle, not a proper "2", for hours 20 through 23. On top of that, it sits physically in the wrong place (away from my computer screen), and it doesn't show civil (local) time along with UTC. I've searched for a good hardware solution with no luck.

Then I looked at some of the many Linux clock options out there. None of those was exactly right for me. I wanted digits I could read across the room, but that would fit onto my crowded Linux desktop. Anyway, it's always more fun to use software you build yourself. I pulled out my Python and wxWidgets (wxPython) references and set to it. As with designing and building your own hardware, half the reward is the stuff you learn along the way.

So here is the product. If it looks like it might be useful to you, it should run on any recent Linux system with no hassles. I also checked it on Windows XP and Windows 7 for good measure. It works there, but you will have to download Python and wxPython from the web. (Linux systems like Ubuntu provide these as part of their repository system.) MacOS will likely also support this program.

This is free (as in beer) and open source software, distributed under the GNU General Public License. That means you can download it and modify it to your taste, as long as you are willing to make your improved source available to the community if and when you distribute your new version.

In a future version, it might be worth working on desktop space efficiency. You'll notice that nearly half the window area is wasted on Gnome's frame decorations and menus. That could be reduced by using "shaped" frames, at the cost of complicating the code and creating non-standard window behavior.
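For anyone tempted to roll their own, here is a minimal sketch of the same idea -- my illustration, not the program above -- showing how little wxPython code a big dual UTC/local display really needs:

    import time
    import wx

    class ClockFrame(wx.Frame):
        # Bare-bones dual clock: UTC on top, civil (local) time below.
        def __init__(self):
            super().__init__(None, title="UTC / Local")
            self.label = wx.StaticText(self, style=wx.ALIGN_CENTER)
            self.label.SetFont(wx.Font(48, wx.FONTFAMILY_TELETYPE,
                                       wx.FONTSTYLE_NORMAL,
                                       wx.FONTWEIGHT_BOLD))
            self.timer = wx.Timer(self)
            self.Bind(wx.EVT_TIMER, self.on_tick, self.timer)
            self.timer.Start(1000)        # update once per second
            self.on_tick(None)

        def on_tick(self, _event):
            utc = time.strftime("%H:%M:%S UTC", time.gmtime())
            loc = time.strftime("%H:%M:%S local", time.localtime())
            self.label.SetLabel(utc + "\n" + loc)
            self.Fit()                    # keep the frame sized to the text

    if __name__ == "__main__":
        app = wx.App(False)
        ClockFrame().Show()
        app.MainLoop()

The timer-plus-StaticText pattern is the whole trick; everything else is decoration.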

Thursday, November 26, 2009

Why Linux/OSS for Amateur Radio?

How to explain to a non-computer-geek ham what Open Source Software and Linux are all about? OSS and Linux are important to software users the same way a good repair manual and schematics are important to hams. Not every ham knows what to do with schematics, but those who are inclined to open up, understand, repair, and modify their equipment certainly do. Without being able to see what's inside and what connects to what, there is very little you can do. That's exactly why you need to be able to access and work with source code when it comes to software.

These issues will directly affect relatively few hams. Many are "appliance operators" when it comes to software, just as for hardware. For them, a proprietary OS may be a good choice because of its familiarity and the huge choice of available software.

We can admire the dedicated hams who build their own stations and who are on the cutting edge of new hardware technologies. It's the same with software. With software becoming increasingly central to amateur radio (in SDR, digital modes, etc.), competence in coding is getting to be just as important as operating a soldering iron.

While you can roll your own software from scratch, it can be far more efficient (and -- as we like it -- cheaper!) to build your code in the OSS "ecosystem", making use of many libraries and tools that are free for the download. OSS really pays off when you give the fruits of your labor back to the community to spur further development.

These are a few of my open source thoughts!

(a comment on a Linux Journal blog)

Wednesday, November 04, 2009

AC plugs around the world

If you've traveled around the world at all, and you're a ham operator, you've surely wondered about all those different AC power plugs. So who has the best? We have a view from the UK here.

Naturally, the UK wins! I can sympathize with the safety advantages: fusing in the plug and automatic shutters in the socket. (This is for a "manly" 240 V, you know.) But the authors don't consider a few other questions: cost, size, and weight, for example! Those plugs are heavy, bulky, and expensive. The wall sockets take a lot of area for each power point. Don't even ask about outlet strips or "cube taps".

Monday, October 12, 2009

My New "Boonton" Model 59 GDO

Here is my big acquisition from the Nutmeg Hamfest this weekend. It is the classic Measurements Corp. Model 59 Megacycle Meter. Some of us would know it as the "Boonton Grid Dip Oscillator." From what I've been reading, a variant of this unit was first produced in World War II. The manual, available here and here on the Internet, bears a 1947 date. The company was sold to Edison in 1953, so this unit was probably produced in the early 1950's. (The meter/power supply unit is serial #750, the oscillator head is #695, the coil set is #734.)

The developer was the well-known engineer Jerry Minter. (See his IEEE.tv interview, which prominently shows the Model 59.) His was one of a number of instrumentation companies active in Boonton, N.J. after the War. There was no connection to Hewlett-Packard, an impression I had at one time.

I used a Model 59 extensively in the 1960's and 70's, but I did not know that it was a classic even then. Now, it's very satisfying to have my own! It is far superior to the Heath GDO and the tunnel-diode dipper that I have also used. I see one advertised at $75 on eBay, but I got mine for about half that.

The grid dip oscillator is a very handy item for generating signals, checking for resonance, and making rough frequency measurements. This one covers the range 2.2 to 420 MHz in 6 bands.
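For newcomers wondering what a dip meter's readings are good for, nearly everything comes back to the resonance formula f = 1/(2*pi*sqrt(LC)). A trivial sketch, with example component values of my own choosing:

    import math

    def resonance_mhz(l_uh, c_pf):
        # Resonant frequency (MHz) of L (uH) in parallel with C (pF)
        return 1e-6 / (2 * math.pi * math.sqrt(l_uh * 1e-6 * c_pf * 1e-12))

    # A 5 uH coil across 100 pF should dip at about 7.1 MHz:
    print(f"{resonance_mhz(5.0, 100.0):.2f} MHz")

Dip the oscillator near the tuned circuit, read the frequency where the grid current sags, and solve for whichever of L or C you don't know.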

[Click photos for more detailed versions.]

It is necessary to inspect all new equipment here at AA6E. (The unit was in remarkably good condition; it worked the first time and needed only a little cleaning.) This device has wonderful "build quality" -- as we say these days. The important part is the tunable oscillator assembly, which is meant to be hand held. The inner works are all gold plated (!) and very sturdy. A single 955 "Acorn" triode tube is the active element. Lead inductances are kept very low to allow UHF operation.



Dial calibration is meant to be quite good. Each unit's coil set is supposed to be calibrated to work with a specific oscillator head, although my coil set's serial number doesn't match my oscillator. The tuning "feel" is very smooth.



The power supply uses a 5Y3 GT rectifier and an OD3/VR150 voltage regulator tube. The red filaments and the purple regulator discharge, along with the incandescent (!) pilot light, are a treat by themselves. Not shown is the neat wiring harness beneath the power supply chassis.



Now I need to build another RF gadget!

Sunday, October 11, 2009

Nutmeg Hamfest 2009

The Connecticut Nutmeg Hamfest for 2009 went on in glorious fall weather on October 11.
Note: Click on photos for more detail.

A hamfest always begins with checking out the competition's mobile installations. There were relatively few HF installations compared to Dayton.


Always enter by way of the flea market to see what's up. People noted that the larger stores and manufacturers were absent this year. There were still a lot of "mom and pop" flea market vendors.


This was the ARRL Connecticut Convention. Left, our Director Tom Frenaye, K1KI. Right, Section Manager Betsey Doane, K1EIC.


A small, but engaged audience for the League Forum.


Joel Hallas, W1ZR, gave a popular talk on multiband antennas. Guess which favorite antennas really don't work very well on the HF bands!

Another rapt audience.


The flea market had some beautiful nostalgic equipment on display. No, it wasn't selling very fast, but it was nice to look at and reminisce. I got a Boonton grid dip meter, just like the one I used 35 years ago!


The Shore Point club (West Haven) had its very snazzy emergency communications trailer on display. It doesn't look like it's worked through many disasters yet!


The Shore Point trailer command desk. Two people, five microphones, and then some.



I thought this was the highlight of the show. It is W1RT's HF-to-10 GHz hill-topping van: an amazing collection of gear, all oriented to VHF, UHF, and microwave contesting.


Overall view of the vehicle area. There was also a converted ambulance from the Danbury group.

I didn't know they gave license plates for Unix veterans, but here is one I found:

Friday, September 25, 2009

Commercialization: ARRL Does a Good Thing

Today, the ARRL Board of Directors released fascinating guidelines on the "Commercialization of Amateur Radio: The Rules, The Risks, The Issues". This paper responds to the apparently growing problem of some for-profit or non-profit organizations looking to Amateur Radio frequencies and equipment as cheap solutions for their disaster communications needs, without much regard for the FCC regulations or the culture and history of Amateur Radio.

The basic points, in my reading: (1) Amateur Radio operators, by FCC regulation and with narrow exceptions, are volunteers and are normally not allowed to communicate when they have a pecuniary interest, for example, on behalf of their employers. (2) Organizations may not use Amateur Radio frequencies when there are alternate licensed (or unlicensed) radio services that can be used, such as the Land Mobile or Commercial Mobile services. (3) Provisions in the FCC regulations suspending some normal rules during time of disaster or in emergency situations do not allow organizations to plan communications on Amateur frequencies without obeying FCC licensing regulations.

The ARRL Board does not cite specific instances of real or planned abuse or exploitation of Amateur frequencies, but we can guess what they might be. An organization wants to assure business continuity by using ham equipment and frequencies, either with or without help from licensed Amateur operators. There might be many variations of this picture, ranging from purely commercial entities to non-profit charitable organizations that want to provide community services, but with in-house paid personnel, with or without licenses. In any case, the nature of the Amateur Radio Service and its frequency allocations would be put in jeopardy by operations that are making an end-run around inconvenient FCC regulations.

The League Board makes a clear and urgent call for Amateurs and Amateur organizations to read and understand the relevant FCC regulations, and to carry the message to all forums where emergency communications services are being planned and coordinated.

Beyond their informative and persuasive arguments about appropriate use of Amateur frequencies, I thought the Board's consideration of the roles of the FCC, the Amateur Radio community, and the ARRL itself was very interesting. We must remind ourselves that Amateur Radio is largely a self-policing enterprise. It is unrealistic and unwise to expect the FCC to specify precisely which uses are acceptable or not acceptable. (Be careful what you ask for!) It is up to each of us to understand the law and the regulations and to develop a consensus about what kinds of operations to support or oppose.

(Reading between the lines, now.) The scary scenario we want to prevent is large governmental, non-profit, or for-profit organizations seeing how sweet the inexpensive, relatively informal world of Amateur Radio is as a way to satisfy their urgent communications needs. In some cases, Amateur groups may be only too willing to accept generous donations of equipment and other support in the name of emergency communications operations. In reality, some of these large organizations have a lot of economic and political power; they could turn around and actually threaten Amateur Radio spectrum in the future by co-opting our frequencies in the name of emergency services.

The commercialization issue is just one example, to my mind, of the growing tension between Amateur Radio, which is traditionally shoe-string, informal, improvisational, and local, as against the well resourced, highly structured, and even quasi-military environment of large-scale emergency response organizations and corporations. Good luck to the ARRL, and to us all, in working that one out.

Update: I should explain that I've been a ham for 50+ years, but apart from the odd Field Day, have had little involvement in the emergency communications side of Amateur Radio. (I admire those who do!) I like DX and casual contacts using CW, PSK31, and even SSB. And I do other stuff that you can read about on this blog.

Saturday, September 05, 2009

Amateur Radio vs HPC & OSS

Lately, I've been wondering what aspects of Amateur Radio would benefit from high performance computing technology. This is prompted by my latest PC - built around an Intel Core i7 processor with 6 GB of RAM. This CPU has 4 cores and 8 logical (hyperthreaded) processors. The benchmark results are very nice: 20,500 Whetstone MIPS (floating) and 45,200 Dhrystone MIPS (scalar) across 8 simultaneous threads. [BOINC's built-in benchmarking - not to be taken at face value!] In addition, we have graphics processors (GPUs) that can be programmed with CUDA, OpenCL, and similar methods.

Some general areas that occur to me:

  • Digital Signal Processing (Audio-ish) - This is the technology behind much of today's Software-Defined Radio (SDR). A significant amount of processing is required for mixing, modulating, and demodulating signals, and dynamic spectral displays are useful. However, the actual amount of horsepower required for typical amateur applications is fairly limited. In most cases, the bandwidth of interest is less than 128 kHz (the range of high-end audio cards for band monitoring displays) and very often less than 3 kHz - the maximum audio bandpass of most communications transceivers. As long as we are working with audio-type signals (PSK31, Olivia, RTTY, and SSB), we are not likely to tax the latest CPUs; see the back-of-envelope sketch after this list. You can use more power if you want to listen to many channels at once, but even so...
  • DSP (Video) - Modern HDTV signals require a lot of processing. In production work, this would mostly be done in special-purpose processors (I think), but amateurs might prefer to operate with commodity PC and GPU hardware. I am not aware of amateur HDTV work for over-the-air transmission, but there may well be some. My initial look at HDTV (and DTV) was discouraging: the sheer numerology of options is overwhelming -- although, of course, you may only need one mode for ham work.
  • DSP (Video 3D) - How about ham communications through 3D immersive environments? Four-pi visual fields, multichannel sound: you could get the gamers into that! (But why ham radio? Why not FIOS-type connections? We can offer off-grid operation and mobility away from cell towers!)
  • Radio Science - Propagation analysis and prediction through detailed atmospheric modeling, ray tracing, etc. This could be extended to real-time monitoring of the HF bands at points around the world, building up toward a real-time DX prediction service.
  • Exotic wideband signalling - Using microwave bands and pursuing the highest bandwidths. Would this be more a question of very fast D/A conversion and I/O than of general-purpose computing?
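To back up the first bullet's claim, here is a back-of-envelope sketch (my own, with assumed parameters, not anyone's production SDR code): channelize a full second of 128 kHz complex baseband with windowed 2048-point FFTs, as a waterfall display would, and time it. On any recent CPU the answer comes out in milliseconds.

    import time
    import numpy as np

    fs = 128_000                    # sample rate, Hz (assumed)
    nfft = 2048                     # FFT size for a waterfall display

    # One second of synthetic complex baseband "signal"
    x = (np.random.randn(fs) + 1j * np.random.randn(fs)).astype(np.complex64)
    frames = x[: (len(x) // nfft) * nfft].reshape(-1, nfft)

    t0 = time.perf_counter()
    spectra = np.fft.fft(frames * np.hanning(nfft), axis=1)
    print(f"1 s of signal channelized in "
          f"{(time.perf_counter() - t0) * 1000:.1f} ms")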

There must be more areas for HPC in Amateur Radio. As a long-time HF operator, I'd like to be able to translate CPU power into greater sensitivity and interference rejection. We can certainly consider computationally intensive modulation and demodulation schemes, but it's not clear what returns are available in a channel's SNR given the practical limits on transmitter power, modulation bandwidth, antennas, etc. The ionosphere degrades the signal so that frequency resolution below about 1 Hz or so is not going to be very productive. How can a "supercomputer" overcome that?

Ideas, anyone?

[This is cross-posted with my SourceForge blog: http://sourceforge.net/userapps/wordpress/aa6e/. Why do I have an SF blog? For topics relating to open source software, to see what WordPress is all about? Something like that!]

Friday, August 28, 2009

Orion Surgery

My Ten-Tec Orion transceiver was 5 years old in February. It's getting to be an old timer -- in computer years, at least. It needed a few chores done.
  1. The LCD panel was flickering. Internet scuttlebutt says this may be due to the impending failure of filter capacitors that were undersized (in terms of ripple current handling).
  2. With the current version 2.xx firmware, the LCD contrast is close to zero after a hardware reset. This can be a serious problem, depending on room temperature. Contrast is normally set via a menu adjustment -- that requires you to navigate through the LCD display. The display could be so faint that you couldn't make the adjustment. There is an internal bias control that should fix this.
  3. Finally, I have been seeing the "RIT freeze" problem that others have noted. At times, the RIT control loses the ability to make any adjustments. Many people have reported this problem, but the cause and cure have not been well understood. It seems that you can fix it temporarily by grabbing the RIT/XIT encoder control and twisting hard in odd directions -- as if there were a bad solder joint on the PC board. A recent report suggests the trouble is actually with one of the board-to-board connectors making unreliable contact.
All the above have been discussed on the helpful Ten-Tec reflector, which I recommend to other Orion users. I have been saving up these problems for some time now. Hopefully, I only have to disassemble the Orion once!

Making a long story short, I pulled the rig out of my operating position, unplugged many cables, unscrewed many screws, and finally spread out the works on my bench. The RIT fix requires disassembling the front panel and the LCD/computer board. That's a fairly big operation, particularly since a wrong move with a tool could lead to a very costly repair job.

I located and replaced 3 electrolytic capacitors on the A9 board with higher voltage, low ESR units, per the helpful analysis of N6IE. Soldering was tricky, to prevent inadvertent short circuits and to get enough heat into the ground connections, which were not thermally isolated from the big ground planes on either side of the board. (That could have been avoided by a better PCB layout!) OK. That's problem #1 done.

The email instructions say that to solve #3, you need to look for bent or misaligned connecting pins on the computer / LCD board. I looked very carefully, but saw no misalignment or other problem. Still, it's possible there was some oxidation or dirt. I cleaned the pins and the sockets as best I could. So problem #3 might be resolved.

The LCD bias issue (#2) turns out to be easy. There is a control on the LCD/computer board that sets the default bias for the display. You just adjust it for a good display with the LCD menu set at the default 50% level (and of course with the power on and the rig having a chance to warm up).

Having accomplished all this, I reassembled the Orion, put back all those screws, and returned the rig to the operating position. Everything appeared to work, except there was no transmit power. Oops! There are a lot of reasons why this could happen. Fortunately, I did not smell smoke, and that eliminated some of the more expensive possibilities. Still, I'm not sure how much bench testing and repair I would be able to do if this turned out to be a non-trivial problem. Thinking of where that shipping box might be, and how much Ten-Tec charges...

So, what did I really do in that repair process? I changed out the capacitors, true, but the fact that everything works in receive mode indicates that the power board was probably working. The other thing I did was to unplug many internal cables, disassemble the LCD boards, and reassemble. There could be a loose connection or a misplaced cable.

The problem was "fixed" by opening up the box all the way again, and carefully reinstalling all the cables and screws. We have transmit power for now. I hope it stays that way!

The triple repair job is done. The LCD is behaving much better than before, and at least for now, the RIT/XIT controls work smoothly.

One thing I hadn't counted on was the trouble of reconnecting my rat's nest of cables to the Orion. I have a linear, a transverter, computer audio and controls, and other gadgets that need to be connected. All these just got hooked up over several years, and of course I had nothing documented on paper. Figuring out how to reconnect everything took a while, but it should be better next time. Now the cables have labels on them. Documentation is still pending...

Friday, August 21, 2009

Atomic Disintegration

My new, humble computer based on Intel's Atom CPU was a great little project at a low cost. It benchmarked at about half the performance of my older Athlon XP 2000+ on a per-thread basis, and it provided two hyperthread "processors" for Linux to use. All for about $64 for the board and CPU. (And it is 64-bit capable, but the value of a 64-bit OS is unclear in such a small system.)

Then, for larks, I set it up to run as a member of my World Community Grid effort. That had it running two threads for 24 hours a day. This was somewhat pointless, given that the new Intel Core i7 system is so much more powerful*, but it did manage to churn out some work-units for the Cause.

All was well for about 4 days. Then ping!, the Atom froze. I could reboot into BIOS sometimes, but couldn't load Linux or even memtest86+. An actual hardware failure -- I haven't seen many of these in recent years.

Over a period of days, I got better acquainted with my UPS driver as I tried substituting various parts. I suspected the hard drive or CD/DVD drive (old IDE technology), and I tried a substitute 1 GB DDR2 RAM module. No help.

So the fault was either in the Intel D945GCLF motherboard or the power supply. The motherboard has about 10 million times more transistors, so that seemed the likely culprit.

Using Amazon.com's amazingly efficient returns/exchange process, I had a new mobo in short order, and all now seems well.

Do I dare run the WCG application any more?

As to ham radio, this system is my logging and digital modes computer for AA6E. We were dead in the water. I could have fallen back to my mic or key and used paper logging, but I wasn't quite that desperate!

--
* Both in absolute (45,232 MIPS vs 3,060 MIPS) and in power-specific terms (302 MIPS/W vs 64 MIPS/W).

Thursday, August 13, 2009

Comparative Blogging

I thought I'd try setting up a blog using the new WordPress facility at the famous open source software site, SourceForge.net. So far, the format is a little disappointing. What do you think?

Update 8/21/09: I see that the SourceForge.net blog (above) now sports a reasonable (if unimaginative) blueish skin. SF has now provided us with two (count them, two) themes!

Tuesday, August 11, 2009

Grid Progress

The big application for Gimli (Intel Core i7 system, described earlier) has been participation in the World Community Grid, which is an IBM-sponsored project for channeling volunteer computer systems into a "grid" that can apply supercomputer-level power to selected scientific problems, mostly in life sciences. WCG uses the BOINC framework from UC Berkeley. There are many other grid projects using BOINC, and that's where most of the physical sciences and math projects seem to be. (And where my sympathies really lie. I may have to jump off the WCG ship eventually.)

My desktop system is "competing" against zillions of computers (1,328,064 "devices" under control of 468,410 "members"). On the one hand, many members have tried the project but have not actively contributed. On the other, many of the members command fleets of computers in academic or industrial settings, and are able to direct much of their otherwise wasted CPU cycles to the WCG.

In my case, my one computer, running 8 parallel threads, delivers about 8 days of "computing" for every calendar day. My rank in the WCG project is improving on a daily basis. About 30,000 members have provided more processing power (over the lifetime of the project) than Gimli has in about 6 weeks. We'll never be #1, but we should be able to climb the ladder for some time to come.

Remarkably, Gimli is able to run 8 CPU-bound jobs around the clock (at low "niced" priority) without significantly affecting interactive work -- browsing, email, office applications, etc. Only in a few cases is there a noticeable slowdown, e.g., firing up a large VMware image to run Windows XP.

Readers of this blog may ask what all this has to do with Amateur Radio - our raison d'être. That's a good one. I am looking for ham applications that can profit from high-power desktop computing. You might think that Software Defined Radio would be one, but the SDR work I know of really works well in smaller-scale systems. Digital video would be a candidate. Do you have suggestions?

Friday, August 07, 2009

Gimli Perfected

If you read my earlier post, we left this new Gimli system (Intel Core i7 920) on the edge in terms of operating temperatures and cooling. We experimented with ducting and undervolting and got almost to the point of running at an acceptable temperature at full CPU load. Almost, but not quite.

The problem was Intel's default stock chip cooler system, which is marginal if you want to run full out for an extended period. Marginal, at least, if you don't implement their recommended side ducting system.

I wanted to get this problem solved so that I will not have to watch my operating temperatures so closely while running BOINC or other intensive applications. After some research, I ordered the Cooler Master V8, a large heat pipe / radiator / fan system that fills up my computer case very nicely. (Leaving a little room, but not much, for hooking up the wiring afterward. This kind of assembly is not for everyone.)

The following photo gives an idea of the scale and airflow. The flow from the cooler's embedded fan conveniently goes directly to the case's rear exhaust fan.
The bulk of the radiator system really does take up nearly all the volume above the CPU, but fortunately there were no mechanical interferences with my Gigabyte EX58-UD4P motherboard or RAM. The whole thing just fits in the Antec case without trouble. It does weigh almost 900 g (about 2 pounds). While the motherboard mounting seems fairly secure, I am sure that my computer would not fare too well if dropped on a concrete floor.

Results

The quick before and after comparison, with my undervolted i7 (Vcore=1.01250 V):

Temperatures are given as (Tgpu, Tcore, Tamb, Tchip), in deg C, with 8 threads of BOINC code (Proteome Folding Project) running:

Before: 48, 80, 39, 75
After: 45, 52, 36, 45

The final (chip) number is the best one to focus on. An improvement of 30 C is a lot more than I had expected, but I will accept it! This is all with the cooler fan running at 1767 rpm. It will run up to about 1970 rpm, but that noticeably adds to the noise level, and the higher speed does not lower the operating temperature by much. We might need to speed up the case's exhaust fan if we cared about cooling more than noise. (I don't!)

I should emphasize that "normal" interactive computer operations would be much less than the 100% load we are discussing here, so that even the stock Intel cooler would be fine. On the other hand, if you are spending the money for the i7 system, why wouldn't you want to run it full blast?

According to my (perplexed) intuition, having read much of the Intel literature, it should be fine to run at a chip temperature of 60 C, at least. So there is a lot of headroom to explore overclocking with higher core voltages.

Sunday, August 02, 2009

New Atom System

The latest computer system to come to life here at AA6E is based on the Intel Atom 230 processor. The Atom (out for a year now) uses the latest 45 nm fab technology for a chip that is optimized for low power consumption, but it supports either 32- or 64-bit operating systems and hyperthreading, giving the appearance of a dual-core system to the OS.

I bought the Intel D945GCLF board, which has the mini-ITX format -- it's really small compared to my other ATX and mini-ATX systems.

Intel D945GCLF Motherboard and Atom 230 Processor

The board mounts nicely in a spare Mini-ATX computer case, with all kinds of room to spare. I attached an older 20 GB hard drive and a DVD drive to the legacy IDE port. Eventually, I may repackage the system into a smaller case with more modern I/O devices. But one of the nice features of the motherboard is its support for ancient interfaces -- serial, parallel, PS/2 mouse and keyboard, and IDE -- along with newer SATA, USB, etc.

The focus for me was low power consumption and low out-of-pocket cost. The CPU is supposed to require only 4 W, and the basic board was $64 at Amazon, including CPU but no RAM. The major glitch in building the system was that my numerous old PC power supplies did not provide the ATX12V connector (the 4-pin 12 V CPU power connector) or any SATA-style connectors. So, while I might have been able to wire up some connectors to use with an older supply, I sprang for a new unit. Selecting one was not as easy as it might have been: the power demand is minimal (under 100 W), but if you want "quality" (efficiency, power factor, noise filtering), those features are promoted mainly for larger supplies. I ended up with a 500 W, $35 unit.

Performance

A thorough review of an Atom 230 system is available at Tom's Hardware. I did some testing that is more specific to my environment, comparing it with other systems here.

A favorite benchmark, which works with multi-threaded systems, is to "make" the hamlib system from source. (I am a Hamlib developer.) The make "-j" switch lets you specify how many parallel threads to divide the workload into. This is a useful test for some kinds of programming work, but it is hardly representative of all possible applications.
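In outline, the benchmark amounts to something like the following (a reconstruction, not my actual procedure; it assumes a configured hamlib source tree in the current directory):

    import subprocess
    import time

    for jobs in (1, 2, 4, 8):
        subprocess.run(["make", "clean"], capture_output=True)
        t0 = time.perf_counter()
        subprocess.run(["make", f"-j{jobs}"], capture_output=True)
        print(f"-j{jobs}: {time.perf_counter() - t0:.1f} s")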

Pentium III 800 MHz
(This is the one that the new Atom system will replace.)
  • Idle power: 112 W (AC at wall plug)
  • Make (single thread) : 483 sec.

Athlon XP 2000+
  • Make (single): 239 sec

Core i7 920 (See prior article here and here.)
  • Idle power: 100 W
  • Full CPU load: 150 W
  • Make (-j 1): 41.8 sec
  • Make (-j 8): 17.3 sec
  • Make (-j inf.): 16.1 sec

Atom 230
  • Idle power: 40 W
  • Make (-j 1): 276 sec
  • Make (-j 2): 229 sec (showing improvement by hyperthreading)
  • Make (-j inf.): requires excessive virtual memory, never completes

Conclusion

The Atom-based system works very nicely for my application: running all my ham radio operations - digital modes (fldigi), logging (xlog), transceiver control, etc. The system has a responsive "feel" under Ubuntu that is equal to anything I was using before my Core i7 system came along. The low operating power and silent operation are pluses, and the small board size would be handy if I put it into a smaller case. Cheap is good, too.

Update:
BOINC benchmarks, Atom 230 (40 W)
612 floating point MIPS (Whetstone) per CPU
1530 integer MIPS (Dhrystone) per CPU

BOINC benchmarks, i7 920 (150 W)
2450 floating point MIPS (Whetstone) per CPU
5654 integer MIPS (Dhrystone) per CPU

Since the i7 920 has 8 "processors" (4 cores) and the Atom 230 has 2 "processors" (1 core), the i7 is much better in CPU MIPS per Watt.
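The per-watt arithmetic, taking the quoted wall-plug figures at face value (a rough sketch; the Atom's draw under load is probably somewhat above its 40 W idle figure):

    atom_mips = 2 * 1530    # Dhrystone MIPS, two logical processors
    i7_mips = 8 * 5654      # Dhrystone MIPS, eight logical processors
    print(f"Atom 230: {atom_mips / 40:.0f} MIPS/W")    # about 76
    print(f"i7 920:   {i7_mips / 150:.0f} MIPS/W")     # about 302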

KY Amateur Radio in the NY Times

In Kentucky, Officials See Ham Radio as a Backup


OWENSBORO, Ky. (AP) — Local officials are turning to older technology to solve some of the communication problems they encountered during January’s ice storm and the windstorm after Hurricane Ike in 2008.

Full article...

Thursday, July 09, 2009

Bing!


Try Bing.com, the new Microsoft search site, on "ham radio". Interesting results, especially the videos at the bottom.

http://www.bing.com/search?q=ham+radio

Saturday, July 04, 2009

Cooling, Undervolting, and Greening Gimli

So here's the best answer so far. I don't want to make an unsightly hole in my computer case's side wall (see prior post), but I need to get the air flow under control for the best cooling of my Core i7 920 chip. The solution is to use some 4-inch plastic air hose with wire reinforcement to direct supply air from one of the two front air inlet fans (lower right in the photo) to the CPU cooler fan. This minimizes the "blow back" of hot air from the CPU cooler outlet to the cooler inlet. The solution still works well when the case is buttoned up, and we don't have to worry about the acoustic and radio frequency noise that would leak out through any new holes.

The hose is available at hardware stores for clothes dryer connections.

So, the airflow and CPU cooling is about as good as it is going to get without moving to a more exotic chip cooler scheme. The next step toward controlling operating temperature is to minimize CPU power consumption. My philosophy is to keep the specified CPU performance (i.e., clock rate), but to reduce operating voltage to a safe minimum. This is called "undervolting".

(Many computer tinkerers want to get the highest performance possible from their chips, e.g., by "overclocking". This generally requires operating at higher voltage which in turn means more power dissipation, bigger and noisier cooling systems, etc.)

I devised a simple Python program that loads all 8 "processors" (4 cores x 2 hyperthreads/core) for a specific interval and measures the temperature rise. This is a good test of the cooling system and the power level of the CPU (as a function of core Voltage, clock rate, etc.) It is not meant to be a typical application program, just a source of heat. (Actually, it is such a trivial program -- a tight loop -- that it causes more power drain than typical real programs.)
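The program is nothing fancy. A minimal sketch of the same idea (not the original code) looks like this:

    import multiprocessing
    import time

    def burn(seconds):
        # Spin in a tight loop for the given interval -- pure heat.
        end = time.time() + seconds
        n = 0
        while time.time() < end:
            n += 1

    if __name__ == "__main__":
        procs = [multiprocessing.Process(target=burn, args=(300,))
                 for _ in range(8)]   # one worker per logical processor
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print("Soak complete -- go read the temperatures.")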

The standard CPU core Voltage (Vcore) is 1.29375 V. Our Gigabyte GA-EX58-UD4P motherboard provides great flexibility in selecting all kinds of CPU parameters, but I am mainly varying Vcore. Running the test program, I observe a final case temperature (Tc) of 65 C and mean junction temperature (Tj) of 75 C. The AC supply power is about 204 W under load, and 98 W at idle.

I tried successively lower Vcore settings. At Vcore = 0.91250 V, the computer would begin the BIOS display at boot, but would fail before it completed. At Vcore = 0.95000 V, it booted partly into Linux, but it failed before getting to the Ubuntu login. At Vcore = 0.97500 V, we had a successful boot. The supply power was 150 W under load, 97 W at idle. Final temperature was Tc = 53 C, Tj = 57 C.

It is not clear how stable the system would be at 0.975 V under all conditions (room temperature, computing loads, etc.), so I raised the voltage a couple of steps to Vcore = 1.01250 V. At this voltage, we ended with Tc = 53 C and Tj = 58 C. AC power under load was 155 W; at idle, 97 W.

From Intel's data, it is not clear what the ultimate "safe" operating temperature of the i7 920 chip is. As we said earlier, it comes down to a subjective judgement about how much stress is acceptable. Less is always better from the standpoint of system stability and longevity. Some of Intel's charts suggest that Tc = 68 C is an acceptable maximum. In that case, our operating maximum of 53 C should be fine. (Note that we must allow for ambient temperature changes. Our measurements here are at Tamb ~ 20 C, but our ambient could rise to ~ 30 C on a hot day.)

For now, the engineering is "done", and we can go back to BOINC running all day without fear. We have also trimmed about 25% from our power bill.

Monday, June 29, 2009

Gimli Explorations (Core i7 Thermal)

My earlier posting introduced Gimli, our new Intel Core i7 920 system. Time for some progress reports. My main focus has been operating with a heavy scientific computing load -- running under the BOINC framework for distributed computing on large-scale scientific problems, such as the Search for Extraterrestrial Intelligence (SETI), protein folding, etc.

The system is based on a Gigabyte EX58-UD4P motherboard in an Antec "Solo" case:

The CPU cooler is the stock Intel unit. The photo shows the bottom of the cooler with the central copper disk, which, after coating with heat sink compound, presses down on the top of the CPU. Under the heat sink is a fan that draws air down (up in the photo) from the "cool" case interior, past the fins, and then radially outward to be drawn away by the case's nearby exhaust fan.


Normally, the Antec case is covered on the side by a solid removable panel, which works very well for noise control and appearance. However, Intel's Chassis Air Guide manual suggests a different "solution" for good thermal performance. They propose a duct leading from ambient air outside the case down to the top of the CPU cooler. This prevents recirculation of warm air within the case, which would tend to heat the CPU unnecessarily. So, I did experiments under 3 conditions: (1) case closed with the normal side panel, (2) side panel removed, and (3) side panel removed with a temporary duct installed. The photo shows the temporary duct in place, made from two 8.5 x 11 inch sheets of paper and Scotch tape. (I suppose I should have used duct tape...)


The computing load ranged from zero (Ubuntu Linux 9.04 idling) drawing 100 W AC power, to full load (8 parallel threads of SETIathome jobs) drawing about 200 W AC. The LCD display is not included in power measurements, and I am running at stock clock speeds -- no overclocking.

There are 9 temperature values reported by the CPU: one junction temperature for each hyperthread (two per CPU core) and one temperature for the CPU case (below the chip). There is considerable discussion about how to treat these values. For example see the post at Tom's Hardware site. There are calibration questions, and there are questions about what actual CPU temperatures should be permitted. The Core i7 has several self-protection features that will control power dissipation if temperatures are allowed to get "too high". Unfortunately, there is apparently no published maximum temperature spec for the chip. In any case, the maximum we want to tolerate is subjective. Lower temperatures generally mean more operating stability and longer lifetimes.

It's not clear whether there is any meaning in the differences between the two temperature readings associated with each physical core -- or even between the different cores, assuming that the Linux scheduler distributes the computing work equally among them. These junction temperatures typically vary up to +/- 2 C across the group. I will report only the approximate median Tj of the 8 junction readings, along with the case temperature Tc.
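For the record, summarizing the junction readings on Linux can be as simple as this sketch (it assumes the lm-sensors tools are installed and a coretemp driver that labels readings "Core n"; an illustration, not necessarily how I took my numbers):

    import re
    import statistics
    import subprocess

    out = subprocess.run(["sensors"], capture_output=True, text=True).stdout
    temps = [float(t) for t in re.findall(r"Core \d+:\s*\+?([\d.]+)", out)]
    if temps:
        print(f"median Tj = {statistics.median(temps):.1f} C "
              f"({len(temps)} readings)")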

The numbers given are summaries of a number of readings. I don't claim great statistical power -- only a rough comparison of the options. Room temperature ~ 74 F.

Antec side panel on
System idling
Tj = 43 C; Tc = 41 C
System 75% load (6 jobs)
Tj = 72 C; Tc = 63 C
System 100% load (8 jobs)
Tj = 75 C; Tc = 63 C

Antec side panel off
System 100% load
Tj = 72 C; Tc = 62 C

Antec side panel off; paper duct installed
System 100% load
Tj = 67 C; Tc = 62 C

Moral: Intel's ducting scheme really does make a substantial improvement. A 5 - 8 C temperature reduction is significant, but in the absence of hard temperature specifications, it is hard to say how great a benefit to system stability and lifetime will result.

Another moral is that if you buy a packaged Core i7 system from a mainstream manufacturer, you are likely to get an "engineered" thermal system that is better than what a do-it-yourselfer is likely to concoct with generic motherboards and cases. Of course, it's less fun and more expensive to buy complete systems!

The cooling improvement is specific to Intel's standard CPU cooler. There are numerous more exotic chip coolers on the market, which may not require a ducting system.

My options seem to be:
  • Accept up to 75 C junction temperatures when running full loads.
  • Cap operation at 50 - 75% of full load.
  • Cut a hole in the Antec side panel and install a permanent duct.
  • Check out a different CPU cooler.
Note also that the BOINC / SETI CPU load is only one particular application. It is not hard to make programs with more or less power dissipation, but this test does seem to be representative.

Thursday, June 11, 2009

Say hello to Gimli

Gimli is a new home-built system comprising an Intel Core i7 CPU, 6 GB RAM, 1 TB HD, and GeForce 9500 graphics. It runs 64 bit versions of Ubuntu 9.04 and Windows 7. The chip provides 4 CPU cores with hyper-threading, or 8 concurrent threads of execution, more or less.

I am hoping to start to understand CUDA programming and otherwise to see what kinds of DSP we can do. Oh, and maybe load a game or two.

Friday, May 15, 2009

A New Featurette


I incorporated "share this" into this blog. Now you can repost all the wisdom here to Facebook or what-have-you! See the option at the bottom of each posting.

Thursday, April 02, 2009

Do Not Read!


The latest technology toy here (and supposed moneysaver) is the Amazon Kindle 2. I unsubscribed from my dead-tree New York Times and signed up for the Kindle on-line NYT. I calculate that after 18 months or so, the savings will have paid for the Kindle. We'll see about that.

It has been an interesting experience reading the newspaper as if it were a book. I find I tend to get wrapped up in long stories much more easily than before, and I now appreciate some of the fine writing skills of those ink-stained wretches at the Gray Lady. If I let myself, I can spend hours more every day with the paper.

On the other hand, it would be nice if the Times were a little more careful and intentional about formatting itself for the Kindle. Navigation is not too hard, but many of the cues about which articles are more significant than others are lost. Letters to the editor loom as large at first blush as feature articles. (There is no word count provided, as there is for Newsweek.)

Today's bit of strangeness appears above. An obituary is published under the title "Do Not Publish". If that came out in the print edition, there would be hell to pay, but maybe quality control for the Kindle is an afterthought. I checked the online web edition, which has the correct title.

Tuesday, March 31, 2009

Why good QSOs go bad (propagation)

KB6NU has an interesting propagation question about why good QSO's go to pot.

This evening, just after dark, I called CQ on 40m CW. On the second call, VE3QO, in Ottawa, ON replied to my call. On that first transmission, he was at least 10 dB over S9... Unfortunately, on the next go-around, VE3QO was a lot weaker, and on his third transmission he was nearly unreadable.

...

My question is what propagation mechanism is causing this behavior? Is it perhaps the combining of the F1 and F2 layers? If that’s the case, why is the calling station so unusually strong on the first transmission?

OK, I'll bite. I've had similar experiences, not always at sunset. I call CQ -- or answer a CQ -- and signal reports are 59+. After a few minutes, we're both down in the mud. On one level, that's a selection effect. We are more likely to call or be called if the propagation is very good -- we answer the strong stations first. Statistically, you'd expect to start talking to people at a prop. maximum, and things will naturally get worse from there, sometimes dramatically.

My hand-waving "scientific" explanation for both the sunset ("gray-line") and variability issues comes down to thinking of the ionosphere as a time- and spatially-variable propagation medium. We get strong skip when (a) there are strong gradients (acting as mirrors) in ionospheric conductivity, as around sunrise/sunset, and (b) the gradients have a favorable geometry for a given path -- e.g., they tend to lie on ellipsoids that have the two stations as foci, and the ellipsoids are arranged (in Fresnel zones) so that multiple paths arrive in phase.

So it's no surprise that we see a range of signal strengths. We get momentary deep nulls when signals arrive out of phase and cancel. Maybe the real question is why we see the opposite -- short periods of very strong propagation. Again, with hand waving: it is reminiscent of the bright "cusps" you see on the bottom of a sunlit swimming pool, where more or less random surface waves act as lenses that focus the light into brilliant lines.
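The multipath arithmetic is easy to play with. In this toy two-ray model (my illustration, nothing more), two equal-strength paths combine to give anything from a 6 dB peak down to a deep null as their relative phase drifts:

    import numpy as np

    phase = np.linspace(0, np.pi, 7)             # relative phase, radians
    amplitude = np.abs(1 + np.exp(1j * phase))   # two unit-strength rays
    db = 20 * np.log10(np.maximum(amplitude, 1e-6))
    for p, d in zip(phase, db):
        print(f"phase {p:4.2f} rad: {d:+7.1f} dB relative to one ray")

Real paths wander in amplitude too, but phase alone is enough to produce the swings we hear.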

Tuesday, March 17, 2009

STS-119 seen from CT, or what?


This is a long posting about what was and what might have been. It's about how hard it is to get data without preparation and without "pro" tools, but how you can dig something even out of poor results.

This is about the NASA STS-119 Space Shuttle launch that occurred at 7:43 pm EDT on the evening of March 15, 2009. This was billed as an unusual opportunity, because it was to take place just as the sky was darkening over the East Coast. The trajectory was to send it into an orbit with 51 degree inclination, to catch the International Space Station. That takes it more or less parallel to the Eastern Seaboard of the US. Would it be visible at Branford, CT? (lat 41d 15m 09s, long 72d 50m 02s W)

Earlier in the day, I heard about the launch and the viewing opportunity. I set an alarm to remind myself, but otherwise did not think to prepare.

The alarm rang, I looked out the window and surprisingly the sky was mostly clear. There was some haze that obscured the stars below about 20-30 degrees elevation, over Long Island Sound. (Fortunately, from my back yard I have a clear view to the south, east, and north over water.)

I connected to NASA TV with my PC using Livestation, and watched the actual liftoff starting. Without much thought, I picked up my new little video camera (a cheap Polaroid DVC-00725F, bought from Woot.com), dimmed my living room lights, and headed out into the cold and almost dark yard.

Without any detailed trajectory analysis that would tell me the apparent track of the Shuttle as it was to come by Connecticut, I stood watching the southern sky for things that move. (It doesn't help that we are under the main approach to Hartford/Springfield airport from the south, but aircraft are generally easy to identify by their flashing lights, etc.)

I fiddled with the vidicam a bit, never having tried it for "astrovideography". I thought I knew how to make it work, although there are quite a few problems for this application. It's handheld, of course. It has an LCD screen which is annoyingly bright and very hard to use for pointing at "stellar" objects. The autofocus feature (there is no manual setting) makes it likely to go out of focus if you try to zoom. The zoom magnification is not calibrated, and so on. I discovered later that the camera has a "night" mode, but there is no documentation of what that actually does. In any case, I did not use it.

A crucial missing piece of equipment, as it turned out, was a timekeeping device, or so I thought at the time. In fact, I now realize the camera has a clock that I am able to calibrate as being about 42 seconds slow.

I noticed something coming north toward me, so I began an HD recording. From the file date and time, I find that it started at 7:52:28 pm EDT. The launch time was 7:43 pm, so our video starts 9 minutes after launch. According to the CBS trajectory prediction, the Shuttle was at about 1,000 miles downrange from Cape Canaveral at that point, about 65 miles altitude, traveling at over 17,000 mph. Google Earth tells me that the distance from Canaveral to Branford, CT is very close to 1,000 miles as the crow flies. That fits, but I have no way to know what track across the sky I should have expected.

Visual

Was it STS-119? Before considering the video, what did I see as a visual observer? It was very fast -- about 2 degrees/sec at closest approach (estimated from video). That's much faster than commercial air traffic at altitude. It was completely silent, and it was brightest as it was moving away from me to the northeast, perhaps comparable to Sirius. It had an unusual extended appearance (to the eye) as it moved past, and it appeared as a very close visual binary object (i.e., double) with perhaps a few arc minutes separation as it moved away. There was no flashing light, as normally seen on commercial aircraft. At the time, I thought it probably was not the Space Shuttle, but now I am not sure what it was.

Sky

The sky at the time was graphed by Stellarium -- extremely helpful for celestial navigation! Here is a partial snapshot of the theoretical sky:



Conveniently, the bright stars Sirius (m -1.45) and Procyon (m 0.4) are separated by about 25 degrees on the sky and were near the track of the "object", providing good calibration points.

Video

The full 42 sec. video is available for download at my aa6e.net site. (10 MB AVI format) I learned more than I wanted about video processing in Linux to get this video into a usable state. It needed to have its brightness and contrast adjusted to bring up the faint images that are just above the noise level. (An upload to YouTube was a failure, since their processing degrades the quality, even in "HD" mode.) Note that your computer may or may not have the power to play this video at full speed and resolution. It is 30 fps, 1280 x 720 pixels, mpeg4 encoded. (My Athlon XP 2000+ is barely able to manage.)

Because of the limitations of the recording, you may wonder if there's anything useful there at all. But there is!

I constructed a timeline for the video:

Start (07:52:28 pm EDT +/- 1 sec)

+ 00 s: Sirius in view
+ 11 s: Object appears faintly about 150 pixels away from Sirius on a vector about 50 degrees clockwise from "vertical". Diffuse with reddish hue.
+ 15 s: Object passes "above" Sirius (vector 0 degrees).
+ 17 s: Sirius passes out of frame, but object continues
+ 21 s: Procyon enters frame (upper left)
+ 23 s: Object passes out of frame. Some structure (non stellar)
+ 35 s: Procyon leaves frame (right)
+ 38 s: Object re-enters frame, binary point source, bottom center
+ 42 s: Recording ends

Closest approach (highest elevation) would have been about 07:52:51.

If you are not able to view the video, I can present portions of a few of the final frames showing the double object.



Analysis

This is pretty crummy data, acquired impromptu with a marginal recording device. Still, given the stellar calibrations and the timing, some information can be extracted.

First, how fast was the object moving? Tracking it as it moved from the neighborhood of Sirius to Procyon, it was moving at roughly 2 degrees/second. If the range were 100 miles, the linear velocity would be about 12,600 mph, depending on the projection angle. If it were an aircraft at 5 miles range, it would be traveling at about 630 mph.

What about the double structure? The final video frames show a binary separation of about 4 pixels, which corresponds to 0.08 degrees or 4.8 arc minutes -- given a very sketchy idea of the horizontal field of view being 25 degrees. (A guess based on the apparent Sirius - Procyon separation.) The resolution limit of the human eye is said to be about 1 arc minute, and given that the double object was fairly clear, our estimate of 4.8 minutes is not unreasonable.

If the object were at 100 mile range, what would the separation be? I work out 737 feet! From NASA info, the Shuttle wingspan is 78 feet, somewhat more than the separation of the two boosters. However, if the range were 5 miles, the separation would be 36 feet, also large for typical military jets.
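The small-angle arithmetic behind those numbers, for anyone who wants to vary the assumed ranges:

    import math

    def speed_mph(deg_per_sec, range_miles):
        # Transverse speed implied by an angular rate at a given range
        return range_miles * math.radians(deg_per_sec) * 3600

    def separation_ft(arc_min, range_miles):
        # Linear separation subtending a small angle at a given range
        return range_miles * 5280 * math.radians(arc_min / 60)

    print(f"{speed_mph(2.0, 100):,.0f} mph at 100 mi")      # ~12,600
    print(f"{speed_mph(2.0, 5):,.0f} mph at 5 mi")          # ~630
    print(f"{separation_ft(4.8, 100):,.0f} ft at 100 mi")   # ~740
    print(f"{separation_ft(4.8, 5):,.0f} ft at 5 mi")       # ~37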

We are left with a mystery. The time and angular speed suggest that we were seeing STS-119, although quite possibly it should have passed lower in the east -- in our haze layer. The fact that the object was brighter moving away than moving towards us suggests rocket engines or a military jet (afterburners?), but the separation of the double point image is puzzling -- in either case.

So it was a UFO... at least until or unless we find an error in calculation (it has happened before) or more data come to light. Do readers have any suggestions?

Update: There have been a number of other on-line sighting reports from STS-119's pass "over" Connecticut, but I have still found nothing more quantitative about trajectory or timing. Was the Shuttle flying over land or over the Atlantic? (Early in the launch, it has to stay over water for safety, but probably not this far into the flight.) If it was strictly over ocean, it would have had to fly 50 or more miles to the east, which might not be compatible with my observation.

The surprising separation of the "binary" receding view might (with some hand-waving) be explained if the visible light comes from large exhaust plumes rather than the engines themselves. Also, the separation reported above is likely to be an over-estimate, because I measured those frames that showed a clear separation. Many frames are more muddled, but still show an extended image along the same axis. The average apparent separation might be say 2 arc minutes, and it could even be less, if the video zoom factor was larger than estimated.

Saturday, March 07, 2009

Kindle Hack Prescience

Yesterday's post foresaw the wonderful things that could happen if someone did some "special research" on the Kindle. Well, today's news (Ars Technica) tells of a hacker who has indeed gained access to the K2's Linux essence via the USB port. The original Kindle was hacked way back in 2007.

I don't expect to do this myself any time soon, but it is fun to watch another digital barrier fall.

Friday, March 06, 2009

Kindlemania


This is my 2 cent review of the latest gadget here, the Kindle 2 from Amazon. If you haven't heard about it, it's a hand-size tablet with a 6" ePaper screen for very high quality black & white display of text (mainly). It is sold as an electronic book reader with a direct sales link to Amazon.

It's really a lot more than that. It is a free wireless web browsing terminal. The browser identifies itself as "Mozilla/4.0 (compatible; Linux 2.6.10) NetFront/3.4 Kindle/1.0 (screen 600x800)". The "3G" wireless link is actually free with no monthly or setup charges, after you buy the Kindle. The browser is labeled "experimental", and it has a lot of the feel of a cell-phone or PDA browser, but the screen resolution is quite acceptable for sites that are reasonably formatted for its size. I can log in and use my gmail account, but I wouldn't say the Kindle is going to replace my PC for general email.

It is a Linux box. You can see that from the Browser ID above, and you can also see it by reading the 70 (!) pages of legal information that are thoughtfully provided in Kindle's memory. There are interesting possibilities: I suppose it won't be long before we have hackers providing a Linux console session. You could use the Kindle as a (free!) wireless adapter via USB for your laptop. And so on. (Of course, Amazon may not smile on such activity.)

So we have a Linux box with ~2 GB of internal memory, a wireless link (Sprint, I think), USB, audio output, a nice screen, and a semi-usable keyboard. It will hold "1500 books", they say. You can convert and/or install your own pdf, doc, and text files, etc., but not if they're too large. The only advertised conversion option for non-txt files is to send email to Amazon, where they pass it through the "cloud" (I suppose) and send the result over RF (for $0.10 a shot) or by email (free). But the file size is limited by the email process, ruling out many Google Books image-type PDFs, for example.

What the Kindle is not:
  • Email or IM machine. The keyboard is even worse (in some ways) than my Palm Treo's. There is no email client, but the platform would obviously support it.
  • Color. The ePaper is also too slow to support any animation or video.
  • Open. You have to use Amazon's application programs, and there aren't many of them (yet). Our friendly hackers may overcome this in time, but expect Amazon to defend their platform even more than Apple defends its iPhone, particularly because of the bundled RF link (which could be abused) and because of the "secure" commercial authentication that links you, your Kindle, and your Amazon account. (Where they hope you'll spend a lot of $$.)
  • Secure. There is no password or other protection on the Kindle, which means that anyone who gets the physical unit can cause mayhem in your Amazon account. This is a pretty serious shortcoming IMO.
  • Media Machine. The Kindle will play mp3's and show pictures, one at a time. The file browser function is rather weak (no folders), so you'd get lost pretty soon if you loaded up a lot of small files.
  • Telephone. It would not be a big technical step to incorporate a cell phone (VOIP) into the Kindle concept, but there is no audio input, and the network model might not support phone usage. Still, I don't know how many separate high-function digital devices I want to carry with me. The market will sort this out some day.
All in all an interesting device. I will use it for reading books, newspapers, and a few blogs, while waiting for the hacks to appear.

Sunday, January 18, 2009

Ham Radio in Life Magazine

I ran across some Life Magazine photos that have been digitized at Google. At right is an example:

Smitty tinkering with his set, during Radio Ham-Operator's Field Day.
Location:US
Date taken:June 1946
Photographer:Walter B. Lane

You can find more at this Google Images link. So, who was Smitty? He has a nice rig there. Check out the dynamotor for the B+ supply!

Saturday, January 10, 2009

Hams can do that? For $2B?

"State Said to Be Close to Dropping Police Radio Project" says today's NY Times:
State Said to Be Close to Dropping Police Radio Project
Published: January 10, 2009
State officials are close to canceling a $2 billion contract to build a statewide wireless network for emergency agencies, according to officials briefed on the results.

Every once in a while, a news article stands out. I ask myself how much communications infrastructure you could build for $2 B if you asked radio amateurs to work on the problem (even if you paid them a fair wage). That cost is staggering, considering what you could do with a few NVIS-based networks for, say, 1/1000 of the price!

Thursday, January 01, 2009

New Year's Thoughts

Paraphrasing my favorite bookstore bookmark: Danger, the next year may be odd.

Steve K9ZW has a posting that brings home a good question. Is your on-line data secure? (Also, the post from K9JY.) What if your hosting service melts down? Some folks have been able to migrate their blogs and web sites when that has happened. But if you're like me, using a proprietary service such as blogger.com, you may not have that option.

That's an argument for putting anything of long-term value onto a site using industry standard software - preferably open source. But even open source solutions will fade after a while. You can always go back to bare HTML, saving your own off-line backups. (Even then, what are your choices for archival backups?)
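As a minimal illustration of the bare-HTML fallback (the URL is a placeholder, and a real backup should also walk the archive pages), a dated snapshot is only a few lines of Python:

    import time
    import urllib.request

    url = "http://example.blogspot.com/"    # substitute your own blog
    stamp = time.strftime("%Y%m%d")
    with urllib.request.urlopen(url) as resp:
        with open(f"blog-backup-{stamp}.html", "wb") as f:
            f.write(resp.read())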

It's something to think about.