
Journal of thoughts


My Thoughts on Docker

Tuesday, June 30, 2015 - Posted by Keith A. Smith, in Journal of thoughts

Docker uses Linux containers (LXC) to encapsulate a fixed environment into which you have built some software that depends on a stable configuration and needs isolation from everything else. To the software it feels like it is alone on a machine, but it is actually alone in what Docker calls a container. You can have hundreds to thousands of containers running on one machine, and you can group containers together to build larger projects. With that encapsulation you can patch or upgrade the host OS without any fear of breaking something running in a container. Unlike VMware, the encapsulation is not at the chip level with a hypervisor, but at the OS level. So those big servers you have, the ones that can easily run heaps of things but that you don't really want to fill with heaps of VMs (which just passes the update/patch/reboot buck, if you think about it), can run heaps of containers instead. Docker reminds me of FreeBSD 4.0 jails, developed in 2000 for a hosting company, which predated Solaris Zones.
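To make that isolation concrete, here is a minimal sketch using the Docker SDK for Python; the image and commands are just placeholders, not anything from a real setup, and it assumes a local Docker daemon is running:

```python
import docker

# Connect to the local Docker daemon (assumes Docker is installed and running).
client = docker.from_env()

# Start a handful of isolated containers from the same image; each one
# gets its own filesystem and process space but shares the host kernel.
containers = [
    client.containers.run("ubuntu:14.04", "sleep 300", detach=True)
    for _ in range(5)
]

for c in containers:
    print(c.short_id, c.status)

# From inside a container, only its own processes are visible,
# even though all five share the host's kernel.
print(containers[0].exec_run("ps aux").output.decode())

# Clean up.
for c in containers:
    c.remove(force=True)
```

Spinning up five of these takes seconds and almost no memory compared to five VMs, which is the whole appeal.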

Docker has raised many concerns, particularly around security, which in turn can be a gating or limiting factor in its acceptance by several industries. This is the nature of open source: there are many options, as every individual who disagrees with someone else spawns their own solution addressing what they view as the most important problems. In the end there are many container options, even just on Linux, and companies that have embraced container-based virtualization often have more than one such technology in place. This year's OpenStack Summit showed that strongly. Still, I see great potential in containers. One of the caveats I am facing right now while designing my potential future architecture is redundancy/availability: there is no live migration of containers, so you have to plan around that. You would run redundant containers instead, but I can see where IP addressing gets complicated when using keepalived or ucarp, because they would operate not at the container level but at the Docker host level. If you lose a container, the virtual IPs would not become active on the other host, and Docker uses its own network addressing for the containers. Essentially, each Docker host is a "router."
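Since there is no live migration, the fallback is running redundant containers and supervising them yourself. Below is a rough sketch of that idea with the Docker SDK for Python; the service name, image, and port are hypothetical, and in practice you would still pair this with keepalived/ucarp at the host level to move the virtual IP:

```python
import time

import docker
from docker.errors import NotFound

client = docker.from_env()

# Hypothetical service definition, purely for illustration.
NAME, IMAGE, PORT = "web-1", "nginx:latest", 8080

def ensure_running():
    """Naive watchdog: recreate the service container if it has died.

    This only restores the container on *this* host; if the whole host
    dies, something like keepalived/ucarp still has to move the virtual
    IP, and that failover happens at the Docker host level, not inside
    the containers.
    """
    try:
        c = client.containers.get(NAME)
        if c.status == "running":
            return
        c.remove(force=True)  # clean up the dead/stale container
    except NotFound:
        pass  # no container with that name yet
    client.containers.run(IMAGE, name=NAME, detach=True,
                          ports={"80/tcp": PORT})

if __name__ == "__main__":
    while True:
        ensure_running()
        time.sleep(10)
```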


The start of the madness

Friday, August 29, 2014 - Posted by Keith A. Smith, in Network, Xen, Journal of thoughts

After deciding to cut the cord in February of 2014, I thought I should build a network to support our entertainment needs. I cancelled our FiOS TV service because of the annual rate hikes and went internet-only to save more money; besides, we didn't watch a whole lot of TV, and when we did it was only certain channels. After killing the TV service I was able to negotiate a bump in bandwidth from 25/25 to 75/75, which was much needed. I started by purchasing a box of Cat6, and since I already had the other items (connectors, crimper, etc.) I made a weekend project out of it. I put drops in every room and in a few other areas that were a pain to get to; those areas were costly because I put holes in the ceiling while working in the attic. Next I purchased the Synology DS1513+ NAS for about $842 from Amazon in July of 2014. I got it diskless because I didn't know what drives I wanted to put in it at the time; I settled on five Western Digital Caviar Green 3 TB SATA III drives, which ran about $674 from TigerDirect.

At this point I had to make a call on which switch and firewall I was going to use. I thought about going Cisco and grabbing a 3750-X along with an ASA 5510, but that never happened because Cisco now requires a SMARTnet contract to download IOS images. So I moved on to HP (which absorbed 3Com); I had used their switches before and they worked great. I managed to find an HP ProCurve 1810G managed switch on Amazon for $169, then started researching firewalls again. It came down to Juniper, Fortinet, and SonicWall. I had always liked SonicWall along with Juniper, but SonicWall was still more than I wanted to pay, and Juniper seemed limited on throughput in the price range I was looking at. I checked out Fortinet but still wanted something else to compare it to, and somehow stumbled upon the WatchGuard line.

I did some deeper internet research on the WatchGuard products and liked what I saw. I managed to find a demo of the web management interface and was sold on it, so I started looking at WatchGuard models and prices. The T10 ended up being the one I was willing to start with; I purchased it from Newegg for $200 and the license from CDW for $60. All the network gear arrived on a Friday, which was perfect because I would have the weekend to set it all up. I started with the firewall, thinking it would be the fastest to configure. I was wrong about that. I set up the rules that were needed along with the VLANs on the 1810G, but the main issue was that nothing had outbound access to the internet. I tinkered with the rule base for hours, then reached the point where I knew I had set everything up correctly and the cause had to be something else. It was late (around 2am), so I went to sleep because I was out of ideas, and the kids were driving me nuts because they couldn't watch TV thanks to me.

I woke up around 7ish to get back at it. I finished the config on the switch and was sure I had set up the firewall correctly, but outbound traffic was still blocked. A lot of internet research didn't turn up anything that really helped, so I went back through all the docs that came with the T10 to see if I had missed something. By around 7pm Saturday I had gathered everything I needed to call support, because I had a thought that perhaps the device needed to be activated before use. After speaking with support I learned I was right: there is a LiveSecurity subscription that has to be activated, so we took care of that and, bam, outbound internet access. It's always the small things that cause the bigger issues. Once that was resolved I was able to bring up all the Amazon Fire TVs along with the Wi-Fi.

Now that the internet was up I could move on to the NAS. I set up the Synology DS1513+ with the 3TB drives I had bought and configured the LACP bond, which was a pain mostly because of the way I had set up the interfaces on the switch. For some reason ports 14, 16, 18, and 20 were part of trunk4, but the trunk itself was untagged while the ports were still tagged. I removed the ports from the trunk, made sure they were untagged members of VLAN 4, then put them back into trunk4 as LACP members, and now it works like a champ: 4 Gbps of aggregate throughput. After that I migrated all my data off the various "cloud" services, and once that was done I enabled some of the sync features so I could get to the things I needed while on the go.
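For the curious: the Synology configures this through its GUI, but under the hood it is a standard Linux 802.3ad (LACP) bond. Here is a rough sketch of the equivalent done by hand on a generic Linux box using iproute2; the interface names and address are placeholders, it needs root, and the switch ports must already be grouped into an LACP trunk or the bond will never negotiate:

```python
import subprocess

def sh(cmd):
    """Run a command and raise if it fails."""
    subprocess.run(cmd.split(), check=True)

# Placeholder NIC names; on the DS1513+ these would be its four ports.
slaves = ["eth0", "eth1", "eth2", "eth3"]

sh("ip link add bond0 type bond mode 802.3ad")
for nic in slaves:
    sh(f"ip link set {nic} down")          # slaves must be down to enslave
    sh(f"ip link set {nic} master bond0")  # add the NIC to the bond
sh("ip addr add 192.168.4.10/24 dev bond0")  # placeholder address on VLAN 4
sh("ip link set bond0 up")
```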

The next thing I figured I would work on was improving the Wi-Fi service. My old Cisco/Linksys WRT350N router was due to be relocated to light duty, since it had been serving as the edge gateway/router/Wi-Fi AP. I started looking at the newest Wi-Fi routers on the market, and for me it came down to the Asus RT-AC68U and the Netgear Nighthawk tri-band router. The features were about the same, so it came down to price: I went with the Asus RT-AC68U from Amazon for $199 and haven't looked back since. I initially ran the Merlin firmware on the RT-AC68U, but it couldn't do everything I wanted, so I ended up flashing it with DD-WRT, which I had used before on previous devices. I was able to set up my HP printer on it so we could print wirelessly, but I couldn't get the guest network to work the way I needed it to.

The guest network was not stable, and it came down to a bug in the dhcpd: after much testing and research I found there was some sort of issue with the dhcpd in the version of DD-WRT I was running. Enter the WRT350N once again. This time I set it up on its own VLAN for guest Wi-Fi devices that needed internet-only access, so I could have a proper "guest network."

A few months went by before I started working on things again. I purchased a TV wall-mount kit for my man cave and set up my Xbox along with a Mac mini for entertainment. I also got a few Dell OptiPlex 780s that had been retired from work, installed XenServer on them, and connected them to the DS1513+. Then I looked at the core of the network and decided I should buy a rack so I could organize everything; it all worked, but it was an eyesore. I didn't want a 42U rack because I knew I would never have that much gear, and I found a neat little Tripp Lite SRW12US 12U wall-mount rack enclosure on eBay. The specs were perfect:

Height: 25"
Width: 23.6"
Depth: 21.6"
Rack width: 19"
Rack height: 12U

They seemed to sell in the $400 range on eBay and Amazon, which seemed a bit much for a 12U rack. I spotted one on eBay that was up for bidding and sniped everyone at the last minute for $132. At that price it was a total steal, and it came with the cage nuts along with keys for the doors. I bought a universal rack tray to set the NAS on, plus another 2GB memory module for the DS1513+ for $50, a wire organizer panel for $18, and a rackmount PDU for $40, all from Amazon. I re-wired all the cables for everything that was nearby and connected to the 1810G, then installed everything into the rack. Some of the work was painful at the time, but in the end it was all worth it; looking back I would even say it was fun. The next thing on my list is to obtain more powerful servers to serve as my next set of hypervisors. I thought about building my own, but it looks like that would cost around $2,000, so I have moved on from that idea and am looking at used servers with enough resources (CPU and RAM) to support the VMs I want to run. The tough part is finding enterprise-type servers that will fit in my small rack.

I started looking at older Sun and Apple servers on eBay because they were cheap, but I thought to check the XenServer HCL to make sure that would actually work. I found that other people had managed to get some versions of Xen onto Sun and Apple servers, but I didn't want to chance it, so I decided to use the HCL as a guide to finding my next set of servers. I started going through the Dell models and checking the chassis specs to make sure the server would fit in the rack, and found the PowerEdge R210, which looked like it would fit the bill. I ended up buying two PowerEdge R210s and enough RAM to max them out at 32GB each. After receiving them I unpacked them; any time I order a used server I check that everything is seated properly (RAM, processor, etc.). So far so good, so I racked them and powered them on to get an idea of just how noisy these servers were going to be together. I let them run for a few hours and determined they weren't as loud as a normal 1U server would be, but still a bit too noisy for my liking, so I powered them off and un-racked them to inspect the fans, which are always the culprit in noisy servers. I noticed that one of the servers was slightly noisier than the other, and on my second inspection I saw that they had mismatched fans, so I ordered more fans and replaced one in each. The servers run very quietly now, which is exactly what I wanted.


The end of the IT Department

Saturday, July 09, 2011 - Posted by Keith A. Smith, in Journal of thoughts

When people talk about their IT departments, they always talk about the things they’re not allowed to do, the applications they can’t run, and the long time it takes to get anything done. Rigid and inflexible policies that fill the air with animosity. Not to mention the frustrations of speaking different languages. None of this is a good foundation for a sustainable relationship.


If businesses had as many gripes with an external vendor, that vendor would’ve been dropped long ago. But IT departments have endured as a necessary evil. I think those days are coming to an end.


The problem with IT departments seems to be that they’re set up as a forced internal vendor. From the start, they have a monopoly on the “computer problem” – such monopolies have a tendency to produce the customer service you’d expect from the US Postal Service. The IT department has all the power, they’re not going anywhere (at least not in the short term), and their customers are seen as mindless peons. There’s no feedback loop for improvement.


Obviously, I can see the other side of the fence as well. IT departments are usually treated as a cost center, just above mail delivery and food service in the corporate pecking order, and never win anything when shit just works, but face the wrath of everyone when THE EXCHANGE SERVER IS DOWN!!!!!


At the same time, IT job security is often dependent on making things hard, slow, and complex. If the Exchange Server didn’t require two people to babysit it at all times, that would mean two friends out of work. Of course using hosted Gmail is a bad idea! It’s the same forces and mechanics that slowly turned unions from a force of progress (proper working conditions for all!) to a force of stagnation (only Jack can move the conference chairs, Joe is the only guy who can fix the microphone).


But change is coming. Dealing with technology has gone from something only for the techy geeks to something more mainstream. Younger generations get it. Computer savviness is no longer just for the geek squad.


You no longer need a tech person at the office to man “the server room.” Responsibility for keeping the servers running has shifted away from the centralized IT department. Today you can get just about all the services that previously required local expertise from a web site somewhere.


The transition won't happen overnight, but it's long since begun. The companies who feel they can do without an official IT department are growing in number and size. It's entirely possible to run a 20-person office without ever even considering the need for a computer called "server" somewhere.


The good news for IT department operators is that they’re not exactly saddled with skills that can’t be used elsewhere. Most auto workers and textile makers would surely envy their impending doom and ask for a swap.


