Source: www.datasciencecentral.com. Posted by Vincent Granville.
Guest blog post by Francesca Krihely.
Here’s a prediction and a challenge, rolled into one. The prediction: whatever your present level of understanding, you’re going to hear a lot more about Hadoop in the future.
And the challenge? Well, it’s this: whatever the level of your present understanding of Hadoop, you’re also likely to be missing critical pieces of the jigsaw. Which pieces? Read on.
Hadoop, let’s first of all remind ourselves, is an open source data platform which performs a very neat trick. Simply put, Hadoop is a tool for tying together multiple servers into single, easily-scalable clusters, ideal for distributed data storage and processing.
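To make that concrete, here is a minimal, single-machine sketch of the MapReduce programming model that Hadoop runs at cluster scale. The list of chunks stands in for input splits spread across the nodes of a cluster; the function names and data are illustrative, not part of any Hadoop API.

```python
from collections import defaultdict
from itertools import chain

def map_phase(chunk):
    # Each "mapper" emits (word, 1) pairs for its chunk of the input.
    return [(word.lower(), 1) for word in chunk.split()]

def reduce_phase(pairs):
    # The shuffle step groups pairs by key; each "reducer" sums its counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Input split across hypothetical "nodes" (chunks of a document).
chunks = ["big data big clusters", "data storage and data processing"]
result = reduce_phase(chain.from_iterable(map_phase(c) for c in chunks))
print(result)
```

In a real Hadoop cluster the same two functions would run in parallel across many servers, with the framework handling data placement, shuffling, and failure recovery.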
So it’s not too difficult to see just why Hadoop has been so phenomenally successful.
For one thing, by allowing organizations to piece together clusters from inexpensive commodity x86 servers, Hadoop sharply cuts the cost of cluster construction. And being open source, Hadoop not only works well with other open source technologies, but also offers an attractive—and surprisingly affordable—cost of initial acquisition and ongoing ownership.
All of which does a lot to transform the prospects of Big Data within even the most cash-constrained organizations. And so, by happy coincidence, even as Big Data has become all the rage, the price of entry to the party is pretty much open to all.
In short, thanks to Hadoop—and other allied open source technologies—organizations can readily store, extract and analyze data in volumes that would recently have been unthinkable. And, what’s more, do it at costs that would recently have been considered unbelievable.
Now, why does this matter? … Read the full blog post at the source.
About The Author: Francesca Krihely is the Community Marketing Manager for MongoDB, the leading NoSQL database. In this role she supports MongoDB community leaders around the globe. She lives in New York City.