I’m sorry. This is the fun-vee. The hum-drum-vee is back there.

If you’re a marketer, you might have heard rumblings about a new tool from Yahoo (yes, you read that right) called Genome. It’s a product of both its Interclick acquisition and advertising deals with companies such as Microsoft and AOL, combined to help “understand consumer needs, anticipate audiences’ future performance and develop efficient media buys.” In short, it gives regular marketing folks a way to leverage Yahoo’s vast collection of data and in-house analytics.

Personally, I think this is a smart move for Yahoo, as we can all agree that they’ve fallen behind in the search and social markets. But for an organization that has invested heavily in (and actually uses) analytics tools such as Hadoop, it’s a great way to create a new channel through analytics as a service.

It’s not the first time we’ve seen this type of service; Google Analytics and Google Trends were created for this purpose. But Genome is focused on and designed for marketers, providing data from many non-Yahoo sources and even letting you integrate your own. It’s a huge market, as analytics services that fit within a limited marketing budget are few and far between.

It’d be nice to see this as the move that saved Yahoo and made them a strong player in an emerging cloud market. For marketers, these types of services are valuable, and when delivered with an attractive cost model, they become almost critical for smaller organizations to adopt in order to remain competitive.

Would you say I have a plethora of piñatas?

As more and more organizations look to Big Data analytics as a way to gain better business intelligence from their large databases, another question keeps coming up: how can you transition your current network storage (SANs, NAS, etc.) to support Big Data when it inherently can’t deal with terabytes or even petabytes of unstructured data? It’s from this question that we’re seeing new methods for dealing with large volumes of data that may well result in a new storage platform design. Continue reading

Never argue with the data.

Over the weekend I was chatting with a friend who was asking what kinds of jobs I thought would be in high demand in the next few years. At some point we got talking about Linux and the grep command, particularly its usefulness for finding a text pattern in a data file. Could it be that grep was the forefather of Big Data? Either way, I made the point that if she wanted to be in a field that would be in significant demand, Big Data is going to be (no pun intended) big. Continue reading
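For anyone who hasn’t used it, grep simply scans a file line by line and prints the lines that match a pattern. Here’s a minimal sketch of that idea in Python (the pattern and file name in the usage comment are made up for illustration):

    import re
    import sys

    def grep_like(pattern, path):
        # Yield (line number, line) for every line in the file that matches the
        # pattern, roughly what `grep -nE pattern file` does at the command line.
        regex = re.compile(pattern)
        with open(path, "r", encoding="utf-8", errors="replace") as handle:
            for number, line in enumerate(handle, start=1):
                if regex.search(line):
                    yield number, line.rstrip("\n")

    if __name__ == "__main__":
        # Hypothetical usage: python grep_like.py "error" server.log
        for number, text in grep_like(sys.argv[1], sys.argv[2]):
            print("%d: %s" % (number, text))

Filter first, then analyze what’s left. That’s the same basic pattern Big Data tools apply, just scaled out across far more data than one file.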