Thursday, March 6, 2008

DNS destination

So, things have slowed down a bit in terms of side projects over the last few days. On the cluster front, we are still waiting on the A/C, the new Ethernet switch, and for the two-post rack to be mounted. I have a bit of work that I need to get done on my thesis (which I will hopefully be posting about later today), and in general everything else is running smoothly. Sadly, that doesn't generate much material for posts, but there is perhaps something of interest in all of this.

We recently decided to bring a few sites that we maintain, currently hosted at shared hosting providers, 'in house.' This is a tedious process of freezing, backing up, copying, deploying, reconfiguring, testing, and then updating DNS data so that the website hits the new server (at least eventually). We used to do our own DNS hosting, but while that was convenient for some things, it ultimately proved to be more trouble than it was worth. Instead, we now use Nettica, which has proven to be very reliable.
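
The last step, waiting for resolvers to pick up the new record, is easy to sanity-check with a few lines of script. Here's a minimal Python sketch; the hostname and address are made up for illustration, so substitute the actual site and the new server's IP:

```python
import socket
import sys

# Hypothetical values for illustration only; substitute the real
# hostname and the new server's address.
HOSTNAME = "www.example.com"
EXPECTED_IP = "203.0.113.10"

def resolved_addresses(hostname):
    """Return the set of IPv4 addresses the local resolver currently sees."""
    infos = socket.getaddrinfo(hostname, None, socket.AF_INET)
    return {info[4][0] for info in infos}

if __name__ == "__main__":
    addrs = resolved_addresses(HOSTNAME)
    if EXPECTED_IP in addrs:
        print(f"{HOSTNAME} now resolves to the new server ({EXPECTED_IP})")
        sys.exit(0)
    print(f"{HOSTNAME} still resolves to {', '.join(sorted(addrs))}; "
          "the change hasn't propagated to this resolver yet")
    sys.exit(1)
```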

Anyway, check back later for hopefully some more interesting content.

Tuesday, March 4, 2008

Pausing to reconsider

Several days ago, I posted that I was looking to displace my Linux-based router/firewalls with 'enterprise,' appliance-like solutions. Let me start again. A lot of this discontent was fostered by an unstable box in a very critical position that had a habit of going down when I needed it to stay up. Since I spend most of my days 132 miles away from that box, I was more or less forced to stick to the plan of rebooting it (remotely) when trouble cropped up. The experience made me very bitter, because every time this machine went down I lost the confidence of those who were relying on services that depended on it. Eventually, I began to wonder whether I was doing right by my customers by using a more versatile and less expensive solution that seemed to be less reliable.

I have come to my senses. Business owners often need to come to the realization, at some point, that spending more money does not automatically increase customer satisfaction. Just because there is a more expensive option that is better marketed does not mean you should question the validity of your original strategy. I have to remind myself of this sometimes as well, as I had to in this case. A Linux machine with a custom 2.6 kernel, coupled with dhcp3-server, bind, openvpn, ntp (as a server), iptables of course, built-in VLAN support, and any expansion card that fits in a standard slot, blows almost anything else out of the water in terms of features, and certainly in terms of price. Many of these features are essential to providing a high-quality and reliable service. The hardware is really no different from what runs in any of the leading 'appliance' solutions either. It's all about the software, and with Linux, most of the time that comes down to your ability to configure it intelligently.
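
For what it's worth, here is a minimal, hypothetical sketch of just the NAT/forwarding core of such a box, written as a small Python wrapper around iptables. The interface names and subnet are placeholders, and a real deployment would layer dhcpd, bind, openvpn, ntpd, VLAN subinterfaces, and a much longer filter policy on top of this:

```python
#!/usr/bin/env python
"""Minimal sketch of the NAT/forwarding core of a Linux router.
Interface names and the LAN subnet are hypothetical placeholders."""
import subprocess

WAN_IF = "eth0"               # hypothetical upstream interface
LAN_IF = "eth1"               # hypothetical inside interface
LAN_NET = "192.168.1.0/24"    # hypothetical LAN subnet

def sh(*args):
    """Run a command and fail loudly if it returns non-zero."""
    subprocess.run(args, check=True)

def main():
    # Let the kernel forward packets between interfaces.
    with open("/proc/sys/net/ipv4/ip_forward", "w") as f:
        f.write("1\n")

    # Masquerade LAN traffic leaving via the WAN interface.
    sh("iptables", "-t", "nat", "-A", "POSTROUTING",
       "-s", LAN_NET, "-o", WAN_IF, "-j", "MASQUERADE")

    # Allow new connections outbound, and only replies back inbound.
    sh("iptables", "-A", "FORWARD", "-i", LAN_IF, "-o", WAN_IF, "-j", "ACCEPT")
    sh("iptables", "-A", "FORWARD", "-i", WAN_IF, "-o", LAN_IF,
       "-m", "state", "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT")
    sh("iptables", "-P", "FORWARD", "DROP")

if __name__ == "__main__":
    main()
```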

It somehow seems fitting that the pizzazz of good marketing is very compelling right up until you try to justify it with numbers and common sense.

-AJB.

P.S. Pictures make stories better
While I am often reminded that slides for talks are best without any text on them (a theory that I debate to this day), I do recognize that blogs are better with images. I think part of it comes from my own desire to look at random images of cool high-tech equipment (try a Google image search for things like 'core switch' or 'fiber' one of these afternoons) and to share my own pictures with others. Part of it also comes, I think, from a desire to share that somehow always seems more genuine when it involves images.

Pictured at the top left is the front of a Cisco PIX 501. It had been sitting in a box for the better part of two years, and I only recently broke it out when I was considering replacing one of our Linux routers. I had to go through the process of flashing it to wipe out the enable password (which I could not remember for the life of me), but from there on out it was smooth sailing. I even drew out a nice diagram of how it would work in my revised network layout at that site. I have some other neat stuff coming in which I will photograph at my earliest convenience, as well as a few other images yet to post 'when time permits.'

Monday, March 3, 2008

Phases of understanding

I am not an electrical engineer, nor am I an electrician. By all rights, I am far from qualified to comment on polyphase power, but I have to throw in a few words. Based on my limited understanding of three-phase power, you are provided with three hot leads, each of which carries alternating current at the same frequency, but shifted in phase. Typically for three-phase power (or perhaps always), that means a shift of 120 degrees between each of the three phases. Now, if you have two 110-volt hot leads that are 180 degrees apart in waveform, you have traditional US residential split-phase power: 110 volts from either hot to neutral, or 220 volts between the two hots. With three-phase power, however, there are 120 degrees of separation between the waveforms, not 180, which means that across two hot legs you get sqrt(3) times the hot-to-neutral potential, not double. (image is from Wikipedia)
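
For anyone who wants to check the arithmetic, here is the standard phasor derivation of that line-to-line figure (nothing here is specific to our feed, it's just the textbook identity):

```latex
% Line-to-line voltage between two phases V at 0 degrees and V at 120 degrees
\[
V_{LL} = \left| V\angle 0^\circ - V\angle 120^\circ \right|
       = V \sqrt{(1 - \cos 120^\circ)^2 + \sin^2 120^\circ}
       = V \sqrt{\tfrac{9}{4} + \tfrac{3}{4}}
       = \sqrt{3}\, V
\]
\[
\text{so with } V = 120\,\mathrm{V}: \quad
\sqrt{3} \times 120 \approx 208\,\mathrm{V},
\qquad \text{versus } 2 \times 120 = 240\,\mathrm{V} \text{ for split-phase.}
\]
```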

So, what got me started on this? At one point last week, I had the opportunity to ask whether we had 'a three-phase power feed, and thus 208v of potential' for the power feeds to the bladeframes. The response I got was to the effect of 'we don't have three-phase power, it's 208v.' Since the Liebert unit (and its monster compressors) invariably runs on three-phase power, and since that is really the only normal explanation for having 208v of potential, I had to run out and make sure that my limited EE knowledge had not failed me. I can now rest assured that it hasn't.

-AJB.

P.S. More pictures
I've also, as you might have guessed, taken far more pictures than I've posted to this blog (maybe you wouldn't have guessed that). Our upload bandwidth here at trinity is rather limited, and I don't have time (even batch-processing time) to reduce the resolution of the images I've taken, so what I'm saying is that they take forever to upload. But since I have a few minutes now, I'll throw up a few more.

This first image is utterly insignificant except that it's a Halon 1211 fire extinguisher. These relics should not be used without a breathing apparatus, but they sure beat a bunch of wet computer hardware. The next image goes well with the theme of water on electrical equipment: it's our EPO button. Something about seeing them just forces me to remind myself, "look with your eyes, Andrew, not your hands." I have similar thoughts when getting on ski lifts. It's so unfair that those damn lift operators get to hit those huge buttons and we don't! I might have watched a bit too much MacGyver when I was little. The third image is of a note attached to the upper compressor in our Liebert unit. I don't really understand anything about the way these things work, but I am unsettled by a note saying that something was almost stripped off in 1995. This thing needs to keep working for at least as long as our new cluster is useful.


The first picture here on the left is a reminder that 61A was a telecom closet (in many ways, it may very well still be). This wall clearly had a lot of stuff mounted on it at some point. The blue wire coming down from the ceiling is a 50(?)-conductor cable that is going to be terminated in the 2-post open telco rack to provide our drop. That black wire coming out of the wall is actually a 12-fiber bundle, which at the very least is disused; at least two of the fibers appear to be totaled (their ends are cut off). The picture on the right is of a bunch of APC units that we removed from the back of the bladeframes. My understanding is that these things are used for redundant power legs, or possibly redundant UPS units (is there a serious effective difference?). We don't have that many circuits, or that much money (how much would a UPS like that cost, anyway?), so these are pending removal to some other location, like the computing center, or eBay. Oh! I almost forgot. You can just barely see the top of a Digital VT320 dumb terminal in the left image. Dunno what's going to happen to that yet.

P.P.S. My post seems to have changed to Arial without asking me. I must be hitting Tab too much.



Sunday, March 2, 2008

As promised


I managed to get a few more hours in with the cluster this afternoon. The Myrinet box has been safely removed and stored accordingly, and the SAN has been mounted.

Contrary to what I said earlier, we actually have seven 250 GB drives. There isn't much point in trying to configure the system yet, as we don't have the necessary patch cables to connect the FC switch to the bladeframes and the SAN, but they should arrive sometime this week. I also want to add that this was by far one of the best-designed rackmount kits I've ever worked with. What's more, it came with all of the right hardware, so everything just worked right out of the box.

-AJB.

Everything that's old is new again

So, I just spent the morning replacing a Linux computer that functions as a router with another Linux computer that will function as a router. Sure, on the surface it appears that you're saving a lot of money by doing this, and it's almost impossible to compare features between a Linux system and a commercial hardware router. Then again, after doing this for a few years, and putting a few of these in mission-critical applications, you stop looking at them as routers and start thinking of them as 'the most important server pending the most inconvenient reboot or panic.' Needless to say, I have deployed many a high-end hardware router in many an application, but the few Linux 'routers' that are left are probably going to be getting the boot sometime soon, as I simply can't have downtime as the price of cost savings. I'll try to fully justify this with numbers shortly.

-AJB

Saturday, March 1, 2008

Thoughts on Mainframes

A few days ago the NYTimes ran an interesting article on IBM's new mainframes and their apparently increasing sales. The article, and the discussion of mainframes generally, is cast pretty strictly in terms of virtualization. The real problem is power.

Anyone who has worked in a modern datacenter can tell you that typically half of your electrical consumption (possibly a little less) ends up going to the A/C units required to cool the other half. Rackspace is very expensive nowadays (depending on where you look), simply because everything else has suddenly become so cheap. I can get a brand new 2U server installed in a rack for only a few grand, at the most. From that point, it's usually only a few months before the combined forces of depreciation and the high costs of power, bandwidth, and rackspace make provisioning and maintaining a home for that server cost far more than it did to procure it. This is all obvious, and would not be at all surprising to anyone who has ever colocated anything. But that's really my point here. Virtualization isn't just a neat technology; it's a way to save money.
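
To make that trade-off concrete, here's a tiny back-of-the-envelope sketch in Python. Every figure in it is a made-up placeholder, not a quote from anyone's colo bill, so plug in your own numbers before drawing conclusions:

```python
# Back-of-the-envelope: how quickly recurring hosting costs overtake the
# purchase price of a server. All numbers below are hypothetical placeholders.
def months_to_overtake(purchase_price, monthly_recurring):
    """Smallest whole number of months at which cumulative recurring
    costs exceed the one-time purchase price."""
    months, total = 0, 0.0
    while total <= purchase_price:
        months += 1
        total += monthly_recurring
    return months

if __name__ == "__main__":
    server_price = 3000.0        # hypothetical 2U server, installed
    rackspace = 150.0            # hypothetical per-month figures follow
    power_and_cooling = 120.0    # roughly half of this line item is A/C
    bandwidth = 100.0
    recurring = rackspace + power_and_cooling + bandwidth
    print("Recurring costs overtake the purchase price after "
          f"{months_to_overtake(server_price, recurring)} months")
```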

The simple cost of shoving a bunch of computers into racks at anything less than a high level of utilization (I really like to think of this as throughput, but that's just my background) is a huge and unjustifiable waste when solutions like VMWare are available. Ballmer, I think, misses the point when he tries to shrug off VMWare as premature and hard to use. He makes the same essential mistake with Linux. What's worse is that he yells to no end about minimizing the total cost of ownership with regard to both of these products, which is, again, exactly the point. IT people are not ignorant of costs. If anything, IT departments are yelled at more than others to pull their weight in terms of expenditures. Innumerable metrics exist to judge a company's performance based on the size of its IT overhead coupled with its customer satisfaction. The use of technologies like VMWare (and it's actually impressive that 5% of the servers out there are virtualized on this platform) represents the most efficient cost-saving measure made available to small and medium-sized businesses in at least the last four years.

The fact that some technicians and managers might have to put in a few late nights to get everything working exactly right is by no means a deal breaker. It's an opportunity for people to feel like they're pulling their weight and really adding to the bottom line.

And so, in some kind of crazy and roundabout way, we have come all the way back to mainframes. If you can afford at least a few of these things, and you're already looking to spend oodles of money on new VMWare deployments, then it certainly appears to make sense on the surface, even just going on the limited information provided by the articles here. When you factor in that these machines are among the most reliable you can buy nowadays, that's even better. Finally, when you buy a piece of hardware like this, you know you have a lot more than a 'certified partner' at your disposal. IBM doesn't like to lose customers. Microsoft doesn't seem to mind sometimes.

-AJB.