Archive for the ‘Data’ Category

SAP re-launched the <a href="http://www.sap.com/solutions/technology/in-memory-computing-platform/hana/overview/index.epx">HANA</a> (<strong>H</strong>igh-performance <strong>AN</strong>alytic <strong>A</strong>ppliance) platform in 2012, and looks to it as the “game changing” technology for BI/DW/analytics. But is it?

Driven by corporate demand for real-time analytics, the HANA platform holds data in memory to dramatically improve performance. This helps address the demand for big data, predictive capabilities, and text-mining capabilities.

But doesn’t this sound like the typical rhetoric from computing vendors, who have previously addressed technology issues by recommending more CPU, RAM, or disk space? SAP HANA is delivered as a software appliance focused on the underlying infrastructure for SAP BusinessObjects. This <a href="http://download.sap.com/download.epd?context=B576F8D167129B337CD171865DFF8973EBDC14E3C34A18AF1CF17ED596163658ABE46C2191175A1415B54F1837F5F0A13487B903339C6F98">white paper</a> suggests a lot of the scoping is centred around hardware and infrastructure design.

HANA makes audacious claims that traditional BI/DW folks would hesitate to whisper. The one that stands out is the “combination of OLAP and OLTP” into a single database. Ouch! Feel the wrath of the stakeholders of business operations. Another claim is running analytics in “mixed operations”. Double ouch!

It’s already challenging enough to get DW/BI solutions deployed without affecting operations. BI folks have constantly advocated separate infrastructure for analytics, with the ETL window as the firewall between systems. That same ETL window has also created delays for real-time analytics. Advocating a move of the BI/DW infrastructure back into operations is going to be a challenge. Yes, it gets us closer to real-time, but it’s going to be hard to make it work politically.
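To make that traditional separation concrete, here is a minimal sketch of the pattern HANA challenges, in Python with hypothetical table names: an ETL job runs in a scheduled window, copying data out of the operational store into a separate analytics store, so analysts never touch operations directly.

```python
import sqlite3

# Hypothetical stand-ins for the operational (OLTP) and analytics (DW) stores.
oltp = sqlite3.connect("operations.db")
dw = sqlite3.connect("warehouse.db")

oltp.execute("CREATE TABLE IF NOT EXISTS orders "
             "(order_id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
dw.execute("CREATE TABLE IF NOT EXISTS fact_orders "
           "(order_id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")

def run_etl_window():
    """Nightly ETL: extract from operations, load into the warehouse.

    Analysts query only the warehouse, so operations stay shielded --
    at the cost of the data being as stale as the last ETL run.
    """
    rows = oltp.execute(
        "SELECT order_id, customer_id, amount FROM orders").fetchall()
    dw.executemany("INSERT OR REPLACE INTO fact_orders VALUES (?, ?, ?)", rows)
    dw.commit()

run_etl_window()
```

HANA’s pitch is to collapse these two stores into one in-memory database, removing the staleness along with the firewall.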

For other BI/DW vendors this approach would be infeasible, but because SAP also happens to be the largest ERP application platform on the planet, it has a good shot at consolidating its ERP with HANA’s BI analytics. Google, Facebook and the other large online behemoths already do it. So why not?!

This is indeed exciting, and it’s definitely time to take a closer look at SAP HANA.


If you thought “Big Data” was already quite unmanageable, the IEEE predicts a 1500% (15x) growth in data by 2015. That is only three years from now.

On a similar scale, the IEEE also suggests that terabit networks should be implemented soon to cater for network traffic demand by 2015. That is 40 to 1,000 times the capacity of today’s gigabit networks.

This probably also suggests that demand for data processing and delivery will need to scale similarly, by some 10 to 40 times.

What products and skills will power the delivery of services for “Humungous Data”?

<ul>
<li>New data systems – GFS, BigTable, Hadoop, Hive, MapReduce (see the sketch after this list)</li>
<li>New data patterns – NoSQL</li>
<li>Cloud computing – a must for elastic computing vs BYO data centres</li>
<li>Open data systems skills – unless you plan to pay for expensive database licences</li>
<li>Web services – to tie it all together</li>
<li>Agile architecture – often under-rated, but increasingly important for focusing corporate development</li>
<li>Agile security – also under-rated, but increasingly important</li>
</ul>
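For a flavour of the MapReduce pattern behind several of these systems, here is a minimal word-count sketch in plain Python. It is conceptual only, not Hadoop’s API: real frameworks distribute the map and reduce phases across many machines.

```python
from collections import defaultdict

def map_phase(documents):
    """Emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Group the pairs by word and sum the counts."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data big networks", "big data big staffing"]
print(reduce_phase(map_phase(docs)))
# {'big': 4, 'data': 2, 'networks': 1, 'staffing': 1}
```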

With corporations already struggling to manage data growth and demand, will this mean a 15x growth in data staffing, or will each data specialist have to be 15 times more productive? I believe it’s a combination of both. New tools will make the data professional more effective. At the same time, because of the lack of training and skills transfer, there will always be a need for the human bridge.

 

 

The future is indeed exciting.

The Agile Director <a href="http://theagiledirector.com/content/4-things-twitter-can-give-business-intelligence" target="_blank">recently commented</a> on using social media feeds as a form of data that gives organisations insight through Business Intelligence initiatives built on social media. This is very true. If companies realise that their businesses are built on their customers, all their internal systems should align accordingly. This applies to retail, property, media, communications, telcos, etc., and the end results are forward-thinking, proactive, customer-centric organisations.

The Data Chasm represents the gap between those who realise this paradigm and those who don’t. It’s as fundamental as the <a href="http://www.catb.org/~esr/writings/homesteading/" target="_blank">manifesto</a> of “<a href="http://en.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar" target="_blank">The Cathedral and the Bazaar</a>”.

Data – a large portion of the corporate future will be driven by those who have it and those who don’t. Beyond that, it is driven by those who know what to do with it and those who don’t.

The gap between the haves and have-nots is growing, and even governments and corporations fall among the have-nots.

Open data is the way forward to close the chasm. Supplying data alone is only the first step. As in economics, banking, media, supply chain and logistics, there are ecosystems of data analysts that churn out information. The common denominator across all these diverse industries is digital media. That is the key to bridging the data chasm.


Born in 1781, <a href="http://en.wikipedia.org/wiki/Charles_Joseph_Minard" target="_blank">Charles Joseph Minard</a> is noted for his “inventions” in information visualisation. Some of his visualisations include:

<ul>
<li>The progress of Napoleon’s army vs distance vs temperature in the Russian Campaign of 1812</li>
<li>The origin of cattle destined for Paris</li>
</ul>

Minard was trained as a civil engineer. <a href="http://cartographia.wordpress.com/" target="_blank">Cartographia</a> has a good list of his work.

One of the biggest problems in delivering value from a business intelligence project is providing insight around a dataset. Delivering insight about a particular dataset is not just about successfully processing and analysing the data in question. In today’s business intelligence (BI) world, the expectations are a lot higher. Valuable insight is derived from correlating a dataset with a sometimes very different, abstract perspective or dataset.

An Example

You have a dataset on radiation levels (thanks to fallout from nuclear power stations). A quick and common question that demands an immediate answer would be “What is the impact of increased radiation?”. That is a very broad question, and even with skilful narrowing of its scope, it still needs to be answered. The basic remaining key perspectives on the question may be:

<ul>
<li>Effect on population?</li>
<li>Effect within a radius of 100km?</li>
<li>Effect on transportation within 100km?</li>
<li>Effect on travel?</li>
<li>Effect on tourism?</li>
<li>Effect on agriculture?</li>
</ul>

All these questions require the custodians of correlated datasets to make their data available. The negotiations to acquire the data would probably take time, followed by the data modelling, loading and analysis. The final outcomes would still be achieved, but under the strain of time and effort.

We can reduce some of this time by having open data and configured data. Consider plug-and-play data. Consider being able to draw data from established datasets with minimal processing, and to derive results quickly. This is where Glitchdata would advocate data by convention.
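As a sketch of what data by convention might look like in practice (the host and dataset names below are entirely hypothetical), imagine every open dataset published as CSV at a predictable URL, so that plugging in a new dataset needs no negotiation, modelling, or custom loaders:

```python
import csv
import urllib.request

# Hypothetical convention: every dataset lives at a predictable URL as CSV
# with a header row, so any dataset can be loaded by name alone.
BASE_URL = "https://data.example.org/{name}.csv"

def load_dataset(name):
    """Fetch a dataset by name and return it as a list of dicts."""
    with urllib.request.urlopen(BASE_URL.format(name=name)) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(text.splitlines()))

# Correlating independently published datasets becomes trivial:
# radiation = load_dataset("radiation-levels")
# tourism = load_dataset("tourism-by-region")
```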

 

 

The OSI Model has been around for several decades now. It remains especially relevant when extending the concepts of n-tiered application design. The application layer of the OSI model can be expanded into the following tiers, sketched in code below:

<ul>
<li>The App Presentation Layer</li>
<li>The App Web Services Layer</li>
<li>The App Business Logic Layer</li>
<li>The App Database Layer</li>
</ul>
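A minimal sketch of these tiers, with hypothetical class and method names, collapsed into one file purely to show the separation of concerns:

```python
class DatabaseLayer:
    """Data storage and retrieval only -- no business rules."""
    def __init__(self):
        self._orders = {}

    def save(self, order_id, order):
        self._orders[order_id] = order

class BusinessLogicLayer:
    """Business rules live here, not in the database."""
    def __init__(self, db):
        self.db = db

    def place_order(self, order_id, amount):
        if amount <= 0:
            raise ValueError("order amount must be positive")
        self.db.save(order_id, {"amount": amount})
        return order_id

class WebServicesLayer:
    """Exposes the business logic to callers, e.g. as JSON over HTTP."""
    def __init__(self, logic):
        self.logic = logic

    def handle_post_order(self, payload):
        return {"order_id": self.logic.place_order(payload["id"], payload["amount"])}

# The presentation layer (browser, mobile app) consumes the web services
# layer; it never talks to the database directly.
service = WebServicesLayer(BusinessLogicLayer(DatabaseLayer()))
print(service.handle_post_order({"id": 1, "amount": 9.99}))
```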

As database systems have evolved rapidly over the last decade, we see them providing features like foreign-key enforcement, indexing, views, triggers, data transformation, full-text indexing, spatial capabilities, and more.
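For a concrete taste of how much logic a modern database will happily absorb, here is a small SQLite sketch (the schema and names are illustrative) that pushes a foreign key, an index, a view, and a trigger into the database itself:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),  -- foreign key
        amount REAL,
        updated_at TEXT
    );
    CREATE INDEX idx_orders_customer ON orders(customer_id);  -- indexing
    CREATE VIEW big_orders AS                                 -- views
        SELECT * FROM orders WHERE amount > 1000;
    CREATE TRIGGER stamp_order AFTER INSERT ON orders         -- triggers
    BEGIN
        UPDATE orders SET updated_at = datetime('now') WHERE id = NEW.id;
    END;
""")
```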

The problem here is that databases start getting bloated, and they no longer focus on the key value that they provide: data storage and retrieval.

So it stands to reason that Amazon Web Services offers SimpleDB as its key database offering for cloud services. Of course, it also offers other relational database services.

So why does Amazon prefer SimpleDB? Scalability, and a lower cost per GB of data stored.
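To see why a schemaless store scales so cheaply, here is a conceptual sketch of the SimpleDB-style data model in plain Python. This is not the SimpleDB API, just the shape of it: items are bags of attributes with no schema to enforce or migrate, which is what makes the data easy to partition across machines.

```python
# A "domain" is just a collection of named items; each item is a bag of
# attributes. No schema, no joins, no triggers -- storage and retrieval only.
domain = {}  # item_name -> {attribute: value}

def put_attributes(item_name, attrs):
    domain.setdefault(item_name, {}).update(attrs)

def get_attributes(item_name):
    return domain.get(item_name, {})

# Items in the same domain need not share any attributes at all:
put_attributes("item1", {"colour": "red", "size": "L"})
put_attributes("item2", {"author": "Minard", "year": "1812"})
print(get_attributes("item2"))  # {'author': 'Minard', 'year': '1812'}
```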

 

 

Data Warehousing (DW) is a common term used in business intelligence (BI) projects and systems. The data warehouse has traditionally been the overhead: a large storeroom that aggregated and staged data from multiple sources into a single point. Analytics could then be conducted on this, providing valuable insights for management.

Now, the problem with the data warehouse is that it’s huge and expensive. The processes to populate it consume large computing resources, and the outcomes after a lengthy project might be inaccurate or off-focus.

Within modern applications and data analytics, we should consider analytics as part of an application’s design, performing smaller analytics projects on smaller datasets before engaging in larger ones. We should also consider incremental processing of data, actively managing data state much as we manage application state.
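One simple way to manage data state is a high-water mark, sketched below with hypothetical row and file names: each run records how far it got, and the next run resumes from there, so the dataset is never reprocessed wholesale.

```python
import json
import os

STATE_FILE = "etl_state.json"  # hypothetical location for the job's state

def load_state():
    """Read the high-water mark left by the previous run, if any."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"last_processed_id": 0}

def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def incremental_run(all_rows):
    """Analyse only the rows that arrived since the last run."""
    state = load_state()
    new_rows = [r for r in all_rows if r["id"] > state["last_processed_id"]]
    for row in new_rows:
        pass  # aggregate / analyse the row here
    if new_rows:
        state["last_processed_id"] = max(r["id"] for r in new_rows)
        save_state(state)
    return len(new_rows)

rows = [{"id": 1}, {"id": 2}]
print(incremental_run(rows))  # first run processes 2 rows
print(incremental_run(rows))  # second run processes 0 rows
```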

This fits well with the Agile methodology.

So, just like the abandoned warehouses along the rivers and docks of modern cities, data warehouses will be abandoned in favour of JIT analytics, Agile BI, and better application designs.

Have you seen a bag full of mustard seeds? Small, round seeds that, if you accidentally drop a handful, scatter across the floor and roll into hidden, tiny places. More concerning still is the ability of a single mustard seed to grow much larger. A bit like Katamari.

Moving data is a bit like moving people. In most organisations, people are frequently involved in the generation, transformation, curation, classification, and analysis of data. And if any of these facets of data management fails, there will be trouble.

The most reliable aspect of such Herculean efforts is the truck, or platform. That is why many organisations prefer to depend on a platform instead of a myriad of parts to make a data project work.

However, most platforms do look like that truck: rigid, low on flexibility, and probably not customised for your organisation’s needs.