
The very backbone of the Internet lies in relationships you know nothing about | GEGATT


photo credit: Patrick Bombaert via photopin cc

Back before the WWW, or World Wide Web, as we know it, before HTML and CSS and PHP and the almost universal use of email over the USPS (or other countries' postal services), we had individuals who ran servers from home that you could dial into to get access to the predecessor of what we know today. These were called BBSes, or bulletin board systems, and it was often only by the grace and permission of the person running one (after a suitable amount of sucking up) that you got both a connection and the ability to create or transfer information.

And. It. Was. Slow.

By today’s standards, that is. Actually, it was slow even by the standards of the day, but we had no choice: modems were really expensive, and the internet as it existed was an extension of the DARPA experiment created as an alternative communications method to telephones, telegraphy, radio, and television. DARPA is responsible for a whole host of cool shit. The problem is that, as the Defense Advanced Research Projects Agency, the government and the military pay for a lot of things. We don’t know about most of them, and the ones we do hear about are either too cool to hide or complete wastes of taxpayer dollars.

However, it is the internet that changed just about everything. Without it we wouldn’t be in a world where information, data, learning, knowledge, and a vast quantity of (often misinformed) opinions are as close as your fingertips. The really cool part of that is we aren’t even past the very tip of possibility. There are a lot of movies that suggest what might be possible, along with the re-adoption of virtual reality goggles, faster internet speeds, streaming content (like videos via Amazon Prime and Netflix), and so much more. At present, we look at the virtual world through the eyes of a flat screen. In the future, we’ll experience the internet as immersive, full-on insanity.

Prior to that, though, we’ll have to accept the reality that is life on the web, complete with blogs, YouTube.com, and social media. In order to get to where we are today and, eventually, to the reality of the future, we need something we call a server. The word can be confused with the server used within a company network, and honestly the two aren’t far apart, because the purpose is the same: to store and serve (hence the name) data to a collection of users. The cool thing about a typical server is that it can be configured to allow or disallow access.

One example of this is the ability to block specific IP addresses (on the web), or countries, or even individual computers. When configured to do so, the server can force access through a username and password or through a number of other mechanisms. This also determines the kinds of data being served and how the data are delivered. Back in the early days of streaming media, and we’re talking pre-YouTube.com, the plan was to get the media as close to the user as possible. The solution (which, I’d imagine, would still be a great idea) was to place appliances at the ISP end that store and deliver the data.
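
To make that concrete, here’s a minimal sketch, using only Python’s standard library, of a toy server that blocks a specific IP address and forces username/password access before serving anything. The blocked address and the credentials are made-up placeholders; a real web server (Apache, nginx, IIS) would handle all of this in its configuration rather than in application code.

```python
# A minimal sketch of server-side access control: block an IP address and
# require a username/password before serving any data.
# The blocklist and credentials below are hypothetical placeholders.
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer

BLOCKED_IPS = {"203.0.113.7"}                              # hypothetical blocklist
VALID_AUTH = base64.b64encode(b"user:secret").decode()    # hypothetical credentials

class GatedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        client_ip = self.client_address[0]
        if client_ip in BLOCKED_IPS:
            self.send_error(403, "Forbidden")              # disallow access outright
            return
        auth = self.headers.get("Authorization", "")
        if auth != f"Basic {VALID_AUTH}":
            self.send_response(401)                        # force username/password
            self.send_header("WWW-Authenticate", 'Basic realm="demo"')
            self.end_headers()
            return
        body = b"Here is the data you asked for.\n"
        self.send_response(200)                            # allow access: serve the data
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), GatedHandler).serve_forever()
```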

Since distance is always an issue and bringing the data closer to the user is always a good thing, we get to the current golden ticket of web delivery: hosting servers on specific networks. For AT&T, Amazon, and Netflix this means contracting with Time Warner, Verizon, and others to host servers within the network, which is (probably) what is meant by the FCC green-lighting ‘fast lane’ access for some content. For Google this means pre-emptive agreements with some ISPs. For AT&T, this means buying into other alternatives for getting data across. Where I live, AT&T wireless has to lease capacity from Verizon, which makes upgrades somewhat slow, since increasing bandwidth means increasing data speed, which means upgrading fiber and hardware.

Google, on the other hand, is attempting to circumvent the whole fast-lane thing, or the loss of ‘Net Neutrality,’ by contracting with ISPs for access on individual networks, that is, by getting the data as close to the user as possible. Consider that Google, Amazon, Apple, Facebook, and almost every other major website and corporation has data centers; getting that data inside the ISP’s network makes it inherently closer to the user. Having the data outside of the network, with circuitous routing across the country and around the world, causes things to be slower. Add to that more priority given to some data over other data via router and server settings and you create a situation where Google and YouTube.com can get to you faster.
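
If you want to see why proximity wins, one crude sketch is to time a tiny request against several candidate mirrors and pick the fastest responder. The hostnames below are hypothetical stand-ins, not real Google or Netflix endpoints, and real content networks steer you with DNS and anycast rather than client-side timing loops.

```python
# A rough sketch of why proximity matters: time a small request against
# several candidate servers and pick the fastest one.
# The mirror hostnames are hypothetical documentation addresses.
import time
import urllib.request

CANDIDATE_MIRRORS = [
    "https://mirror-inside-your-isp.example.net/ping",
    "https://mirror-across-the-country.example.net/ping",
    "https://mirror-around-the-world.example.net/ping",
]

def round_trip(url: str) -> float:
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=5) as resp:
        resp.read(1)                          # the first byte is enough for a timing
    return time.monotonic() - start

def pick_fastest(urls: list[str]) -> str:
    timings = {url: round_trip(url) for url in urls}
    return min(timings, key=timings.get)      # fewer hops and less distance usually win

if __name__ == "__main__":
    print("Closest-feeling mirror:", pick_fastest(CANDIDATE_MIRRORS))
```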

I’d think Amazon.com would do the same thing, though at this point we know nothing about what Amazon.com is doing or trying to do. Apple too. Facebook. Others. Which takes us back to AT&T and the model it uses: buying up content providers that also own bandwidth.

Back in the late 90s, prior to the dot-com bubble bursting, one of the growing trends in data transfer wasn’t DSL but the realization that copper coaxial cable has greater bandwidth than twisted pair. The outcome, in the 2010s, is a strategic shift from copper to wireless. While AT&T knows the endgame the company is playing for, the reality is that bandwidth and data are an important aspect of everyday life and communication.

Which is what all of this comes down to: the ability to accessibly store and serve data.

When looked at that way, my first computer-related job was as a telephone support person for a small, local ISP. My job was to help customers with connection issues. Pretty quickly, building and fixing computers was added as a responsibility. This also meant I was building servers for the ISP (which filled half of the building with A/C and lots of computers). Each new server had to meet certain criteria and, looking back, didn’t look much like the servers I’d later work on in both my server-admin phase of life and my network-installer phase.

Today, a server is part of a dedicated rack of computers that are both powerful and relatively thin. The rack will contain several servers and each server will hold several hard drives. Those hard drives will be mirrored so that the failure of a single drive will not destroy any of the data, and they can be hot swapped, or replaced while the computer is running. Pretty neat, if you think about it. Were you to remove the hard drive from your computer while it was still on, you’d probably fry your motherboard, the hard drive, and (most likely) your hand.
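
Here’s a toy illustration of the mirroring idea (RAID 1 in spirit), with two plain files standing in for physical drives: every write lands on both, so yanking one loses nothing. Real arrays do this in the storage controller or the operating system, not in application code.

```python
# A toy illustration of mirroring: every write goes to two "drives"
# (plain files here), so losing one drive loses no data.
from pathlib import Path

DRIVES = [Path("drive_a.bin"), Path("drive_b.bin")]   # stand-ins for physical disks

def mirrored_write(data: bytes) -> None:
    for drive in DRIVES:
        drive.write_bytes(data)                       # identical copy on each drive

def read_with_failover() -> bytes:
    for drive in DRIVES:
        if drive.exists():                            # a missing file = a pulled drive
            return drive.read_bytes()
    raise IOError("both drives failed")

if __name__ == "__main__":
    mirrored_write(b"cat videos, mostly")
    DRIVES[0].unlink()                                # "hot swap": yank drive A mid-flight
    print(read_with_failover())                       # the data survives on drive B
```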

Consider that data, in all its forms, including music and video and pictures (because that’s really what we all get on the internet for), has to be stored somewhere. As storage capacity increases, so too do the size of the Windows OS and the amount of data that can be stored. This also feeds into increased bandwidth, increased file sizes, and the push for high definition over standard definition when purchasing streaming media. Every song on iTunes, every video, movie, and book has to live somewhere, and that place is, often, one of many servers distributed throughout the world.

In the case of Amazon Web Services, this is even more interesting, as Amazon has established server regions in the Eastern and Western United States as well as in countries throughout the world. While Amazon.com is perhaps the biggest store in the entire world, it is also an innovator and supplier of processor time and server space to a variety of companies and individuals. An example of one company that buys space and processing power from AWS is Netflix. It also follows that Amazon.com, Amazon Prime, Amazon Kindle, Amazon Drive, and Amazon Cloud Player are all housed on AWS.
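
If you’re curious where those regions actually are, you can ask AWS itself. This is a small sketch assuming the boto3 library is installed and AWS credentials are already configured; the region names printed come straight from Amazon’s API.

```python
# A small sketch of listing AWS regions, assuming boto3 is installed and
# AWS credentials are configured on the machine running it.
import boto3

def list_aws_regions() -> list[str]:
    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.describe_regions()                  # one entry per region
    return sorted(region["RegionName"] for region in response["Regions"])

if __name__ == "__main__":
    for region in list_aws_regions():
        print(region)   # e.g. us-east-1, us-west-2, eu-west-1, ap-northeast-1, ...
```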

Let’s say that you want to host a website. You’ve got a few options. You can use a subdomain on Blogger or WordPress or another hosted service. Or you can build and maintain a dedicated web server, leasing an IP address and the required bandwidth; or buy space on a shared server; or buy a virtual private server; or buy a dedicated hosted server with bandwidth. In all cases, the connecting thread is that a server is needed. Data has to be stored somewhere. Files have to be stored somewhere. IP addresses have to point somewhere. Servers exist for the sole purpose of allowing us to communicate with each other in real time or near real time.
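
At the very bottom of that ladder sits the cheapest possible version of “a server with an IP address and some files”: Python’s built-in static file server. This is a sketch, not something to put production traffic on; in a real deployment you’d lease the IP, open the port, and put something sturdier like nginx or Apache in front.

```python
# The simplest possible static file server: serve the current directory to
# anyone who can reach this machine's IP address on port 8000.
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

if __name__ == "__main__":
    # Drop an index.html in the working directory and it becomes "a website".
    ThreadingHTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()
```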

As servers and technology have advanced, the reality that is Facebook has also shifted, and we can post and chat and email and video call all at the same time with no single element being left out. Each element requires a slightly different protocol and, depending on the element, a dedicated server or server array to make it possible.

However, a simple static HTML website will still require space somewhere, and that somewhere depends on the software and architecture being used. Microsoft, in its bid for world domination, created server and hosting software. Unix is the king and current lord-ruler of internet servers. And Linux is the heir apparent, if Unix ever has a combination massive coronary and embolism and burst aneurysm simultaneously. Even so, from what I understand, most modern internet servers are running Linux because a) it’s wicked powerful and reliable, and b) it’s free.

Basically, if you have anything to do with the internet, regardless of whether you’re on the server side or the client side, you have to interact with a server. That server will either feed you information or translate your requests so they can be interpreted out on the big old WWW. The outcome is a stream of data that is constant, continuous, and nearly impossible to measure. The fact that more video is uploaded to YouTube daily than could be watched in any single individual’s expected (and completely unrealistic) lifetime (excepting, of course, the Old Testament prophets) is one way of illustrating just how amazing, awesome, and impossible the internet really is.
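
The client half of that exchange is just as simple in miniature: a request goes out to a server and a stream of data comes back. A minimal sketch against example.com, a stable placeholder site, looks like this.

```python
# The client side of the story: send a request to a server, get back a
# status, some headers, and a stream of data.
import urllib.request

def fetch(url: str) -> None:
    with urllib.request.urlopen(url, timeout=10) as resp:
        print("Server answered with status:", resp.status)
        print("Served by:", resp.headers.get("Server", "unknown"))
        print("First 200 bytes of the stream:", resp.read(200))

if __name__ == "__main__":
    fetch("https://example.com/")   # a reserved, always-on placeholder site
```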

You. Internet. Server. That’s about it.

Until next week, ciao.
