Archive for the ‘Vendors’ Category
Talend, a 10-year-old software company that specialises in open-source data management tools with a subscription-based premium model, raised more than $86 million in an initial public offering (IPO). The lead underwriters included Goldman Sachs, J.P. Morgan, Barclays, and Citigroup.
The company said it issued 5.25 million American Depositary Shares at $18 per share, above the $15-$17 range it originally declared, for gross proceeds of $94.5 million.
In 2014, Talend CEO Mike Tuchen said that the company could go public “sometime in the next couple of years.” The company trades on NASDAQ under the symbol TLND.
Talend’s competitive edge lies in its data integration products, which cost a fraction of comparable tools from Informatica, Tibco, and enterprise software vendors such as IBM, Microsoft, Oracle, and SAP. Talend customers include AOL, Citi, GE Healthcare, Groupon, Lenovo, Orange, Sky, and Sony.
Last year, Talend generated total revenue of $76 million. Subscription revenue grew 39% year over year, accounting for $62.7 million of that total. The company isn’t profitable: it reported a net loss of $22 million for 2015. In the first quarter of this year, Talend posted a $5.2 million loss on $22.7 million in revenue, up 33.5% year over year. In that quarter, 84% of revenue came from subscriptions; the rest came from professional services.
Talend started in 2005 and is headquartered in Redwood City, California. The company had 566 employees as of March 31. Investors include Bpifrance, Iris Capital, Silver Lake Sumeru, Balderton Capital, and Idinvest Partners.
The company offers cloud and on-premises versions of its software, which supports the Hadoop open-source big data framework and includes components based on the open-source Apache Camel.
Kimonolabs started in Y Combinator’s Winter 2014 class. It raised USD5M in 2014, but even that did not delay the decision to shutter its doors, with the team taking jobs at Palantir. Pratap explained that the startup had not been able to have the impact it wanted within two years of launch. So Kimonolabs falls by the wayside, where many other web-scraping tools have gone, leaving its 125K users in the lurch.
They have given users two weeks’ notice to migrate data and services off the platform. The last day is 29 February 2016; the absolute last day for API services is 31 March 2016. Your data will be purged, and Palantir will not have access to it. If you depend on this service, you are probably scrambling for alternatives at this point. When you assess the risk of adopting a technology like Kimonolabs, be sure to consider the financial and resourcing stability of the company behind it.
Meet CAPSICUM Business Architects, a company founded by CEO Terry Roach and based in Australia, focused on understanding your business. Its approach is to turn the business into a digital model, using semantics and a custom-built business modelling platform called “Jalapeno”, thereby enabling, facilitating and quantifying change.
This is indeed revolutionary, and CAPSICUM leads where many have failed: mapping the evolving enterprise.
The Jalapeno tool is a semantic modelling tool that uses tuples and RDF to describe an enterprise, storing this information in a database. It can model the organisation top-down, from a business-centric perspective, or from existing standards. Of course, the tool is only as good as its modellers.
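To make the tuple/RDF idea concrete, here is a minimal Python sketch of describing an enterprise as subject-predicate-object triples and querying them. This is not Jalapeno’s actual model; all entity and predicate names are invented for illustration.

```python
# A tiny triple store: each fact about the enterprise is a
# (subject, predicate, object) tuple, the core idea behind RDF.
triples = [
    ("Sales",              "isA",        "BusinessUnit"),
    ("Sales",              "owns",       "CustomerOnboarding"),
    ("CustomerOnboarding", "isA",        "BusinessProcess"),
    ("CustomerOnboarding", "usesSystem", "CRM"),
    ("CRM",                "isA",        "Application"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern (None = wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Which systems does the onboarding process depend on?
print(query(triples, subject="CustomerOnboarding", predicate="usesSystem"))
# Which things are business processes?
print(query(triples, predicate="isA", obj="BusinessProcess"))
```

Because every fact has the same shape, new kinds of relationships (costs, actors, standards) can be added without changing the schema, which is what makes the triple model attractive for modelling an evolving enterprise.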
Why do I say this is revolutionary? Because the bulk of enterprise architects continually struggle to map the existing enterprise from the ground up, and are often unable to plan the future state effectively.
Maybe the approach to enterprise architecture has been reactive most of the time, and largely unable to keep pace with changing business scenarios. Maybe Business Architecture has a better chance, and maybe Jalapeno has the design to be truly revolutionary.
It’s probably best to map an organisation at the strategic business level, where the benefits of change are being weighed, and then trace that through the organisation, measuring CAPEX, OPEX, actors, structure and the cost of change. Gap architecture, really.
This all sounds like the idealistic state of well-governed enterprise architecture or business architecture practice, and I’ll definitely be happy to see the Jalapeno platform make further progress.
Periscope Data is a cloud-based business intelligence analytics and distribution platform. It takes the pain out of data loading by connecting directly to your data sources, with no messy ETL.
Periscope visualizes your data as charts, graphs and dashboards. All you need to do is write SQL queries; Periscope returns charts, reports and dashboards that you can share or embed.
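As an illustration of that workflow, here is a hedged sketch in Python using an in-memory SQLite database as a stand-in for your warehouse. The table, columns and figures are invented, but the aggregate query is the kind you would hand to a tool like Periscope to drive a chart.

```python
import sqlite3

# Stand-in for the connected data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("APAC", 120.0), ("APAC", 80.0), ("EMEA", 200.0)])

# One aggregate query is enough to drive a bar chart of revenue by region.
rows = conn.execute(
    "SELECT region, SUM(amount) AS revenue "
    "FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('APAC', 200.0), ('EMEA', 200.0)]
```

The point is that the analyst writes only the SQL; turning the result set into a shareable chart or dashboard is the platform’s job.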
Periscope is licensed by the number of data rows you share with it; you can have unlimited users. Every Periscope package includes unlimited charts, unlimited users, dashboards, unlimited embedding and white-labelling, and unlimited support.
Package pricing starts at $1,000 a month for up to 1 billion rows of data and scales linearly from there. There is no annual commitment; you can pay month to month.
You can take advantage of the Periscope caching tool at no additional cost. Caching reduces load on your database, delivers faster performance, and gives you the ability to upload CSVs and do cross-database joins. Periscope claims query speeds up to 150x faster with caching.
https://www.periscopedata.com/
http://wiki.glitchdata.com/index.php?title=Periscope_Data
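The caching idea can be sketched in a few lines: keep results keyed by the query text, so a repeated dashboard query never hits the database twice. This is a toy illustration of the general technique, not Periscope’s implementation.

```python
import hashlib

# Cache of query results, keyed by a hash of the SQL text.
_cache = {}

def run_query(sql, execute):
    """Return cached results for `sql`, calling `execute` only on a miss."""
    key = hashlib.sha256(sql.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = execute(sql)
    return _cache[key]

calls = []
def fake_db(sql):
    calls.append(sql)          # count real database hits
    return [("row", 1)]

run_query("SELECT 1", fake_db)
run_query("SELECT 1", fake_db)  # served from the cache
print(len(calls))  # 1 -- the database was only queried once
```

A real caching layer would also need invalidation when the underlying data changes, which is where most of the engineering effort in such a feature goes.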
GoodData is a company that has been on the data scene since 2007. Founded by Roman Stanek, the former CEO of NetBeans and Systinet, GoodData seems to be in good hands: Stanek sold NetBeans to Sun Microsystems in 1999, and Systinet to HP in 2006.
GoodData has raised USD53.5M in venture funding from the likes of Andreessen Horowitz, Tim O’Reilly, AlphaTech Ventures, General Catalyst Partners, Windcrest Partners, Intel Capital and TOTVS. It employs 291 staff across 5 offices in Prague, Brno, San Francisco, Portland, and Boston.
GoodData has a joint venture with Chris Gartlan to grow an APAC presence. Based in Melbourne, Australia, GoodData APAC has a team of 10 staff focused on growing the business.
So what is the GoodData value proposition? It’s simply a fully managed, cloud-based business intelligence platform. GoodData handles it end to end, taking on the capital costs of building data warehouses and data marts, and providing speed and agility in delivering results.
These results are actionable insights that, under traditional data integration, would cost anywhere from 7x to 15x as much. So whether you run lean on OPEX or CAPEX, the solution can be tailored to your requirements.
Agility comes in the form of a managed solution: business units can now independently build data marts and visualise data. This is where cloud-based BI performs.
So what are GoodData’s strengths, given such a broad focus across a very big data chain? Customer focus seems to be the key. Even with a fully out-of-the-box solution, GoodData is agile enough to custom-fit various parts of the data chain, from data integration and data storage systems to visualisation components.
Outsourced cloud-based BI is the new spin on the disk.
If you haven’t heard of Yellowfin BI, it is a passionate company focused on making Business Intelligence easy. Established in 2003, Yellowfin has been developed to satisfy a range of BI needs, from small businesses to massive enterprise deployments and software vendors.
Yellowfin makes a Business Intelligence platform, built on top of Tomcat/Java, that processes and presents information in refreshing detail. It’s easy to assemble and lets you focus on rapidly building new business value. Yellowfin can be deployed on any server, cloud or on-premises.
Yellowfin is only the second Australian vendor ever to appear in the Gartner Magic Quadrant.
Growing organically, it can barely be called a startup these days, with more than 100 employees and offices in four countries. Yellowfin is running a series of presentations of its technology in December:
Melbourne – 1 Dec
Sydney – 2 Dec
Auckland – 3 Dec
Register for the event today!
Talend has started leveraging Apache Spark in its big data integration platform. Spark’s speedy in-memory execution accelerates data ingestion, and migrating to it can provide performance improvements of 5x to 100x.
Talend promises to make the migration literally as simple as the push of a button: a new refactoring option automatically converts data pipelines written for MapReduce, the previous leader in high-performance data integration, to Spark. In theory, this requires no changes to the high-level workflows a user has defined for a cluster.
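The underlying idea, that a high-level pipeline definition is independent of the engine that executes it, can be sketched in pure Python. This is a toy illustration of the concept, not Talend’s actual mechanism.

```python
# The user declares a pipeline once, as data: a list of named steps.
# An engine (MapReduce, Spark, or this toy runner) interprets it.
pipeline = [
    ("map",    lambda line: line.split()),    # tokenise each line
    ("filter", lambda words: len(words) > 0), # drop empty lines
    ("map",    lambda words: len(words)),     # word count per line
]

def run(pipeline, data):
    """A toy 'engine': applies each declared step in order."""
    for kind, fn in pipeline:
        data = map(fn, data) if kind == "map" else filter(fn, data)
    return list(data)

print(run(pipeline, ["hello world", "", "spark"]))  # [2, 1]
```

Because the pipeline is just a description, swapping the execution engine means swapping `run`, not rewriting the user’s workflow, which is the essence of Talend’s one-click MapReduce-to-Spark conversion claim.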
New projects also benefit from the upgrade, which brings some 100 pre-implemented data ingestion and integration functions that make it possible to pull data into Spark without having to do any programming. According to Talend, the result is an up to tenfold improvement in developer productivity.
There are a number of new Talend features, the biggest addition being “masking”, also commonly known as tokenisation. This allows an organisation to replace a sensitive value with a structurally similar placeholder that doesn’t reveal any specific details. That’s useful in scenarios where, say, a hospital analyst who doesn’t have permission to view patient treatment history wants to check how many medical records there are in a given dataset coming into Spark.
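Here is a minimal sketch of what such masking might look like, assuming a simple keyed-hash, format-preserving scheme. This is illustrative only, not Talend’s actual algorithm.

```python
import hashlib
import string

def tokenise(value, secret="demo-secret"):
    """Replace each letter/digit with one derived from a keyed hash,
    preserving length, case and character class. Deterministic, so
    the same input always maps to the same placeholder (joins still work)."""
    digest = hashlib.sha256((secret + value).encode()).digest()
    out = []
    for i, ch in enumerate(value):
        b = digest[i % len(digest)]
        if ch.isdigit():
            out.append(string.digits[b % 10])
        elif ch.isupper():
            out.append(string.ascii_uppercase[b % 26])
        elif ch.islower():
            out.append(string.ascii_lowercase[b % 26])
        else:
            out.append(ch)  # keep separators so the format survives
    return "".join(out)

masked = tokenise("Patient-4711")
print(masked)  # same shape as the input, different content
```

The placeholder keeps the record’s shape, so counts, schemas and joins behave as before, while the real identifier never enters the analytics environment.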
Gartner is hosting the Business Intelligence & Information Management (BIIM) Summit, a must-attend event for BI/IM folks looking into the evolution of BI/IM to provide value to their organisations. Of particular interest are topics like “the last mile for BI” and the move to “predictive and prescriptive” data.
Two thoughts come to mind when I look at the details of this conference.
First, a lot of organisations are very, very far from “the last mile”. Without a methodical, data-centric approach to BI, many organisations are stuck in no-man’s land: the chaos of semi-defined master data, a plethora of link and translation tables, poorly defined transactional and analytical datasets, weak performance at every level of the BI stack, and probably significantly under-performing BI overall.
Second, business owners naturally want to get value from their BI/IM investments. Traditionally this has been considered an important and justifiable cost to the organisation, especially for decision-making, but it’s increasingly important to extend that capability into future scenarios. Future scenarios involve the unknown: unknown datasets, unknown parameters, unknown visualisations, unknown BI/IM capability. These are the topics I would like to see this conference address.
Anyway, I’ll be attending the conference. Email me if you want to meet.
Conference Tracks
Trends and Futures
Information Innovation
Performance Management
Social and Big Data
Virtual Tracks
Informatica (INFA) has issued a warning on weakness in its Q3 results, guiding revenue down to USD189+M from USD200+M (a roughly 5% drop). The stock market has punished the company further, with the stock dropping from USD45 to USD30 (a 33% drop) over the last six months.
Informatica blames the weakness on Europe, but could it be that the value of its core “data transfer” business is being eroded by open-source equivalents like Talend? For open-source-oriented European companies, Talend is definitely a lower-cost alternative.
SAP re-launches the <a href="http://www.sap.com/solutions/technology/in-memory-computing-platform/hana/overview/index.epx">HANA</a> (<strong>H</strong>igh-performance <strong>AN</strong>alytic <strong>A</strong>ppliance) platform in 2012 and looks to this as the “game-changing” technology for BI/DW/analytics. But is it?
Driven by the corporate demand for real time analytics, the HANA platform seeks to put data into memory and dramatically improve performance. This will help address the demand for big data, predictive capabilities, and text-mining capabilities.
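The layout idea behind in-memory, column-oriented analytics can be sketched in Python. The data here is invented, and this only contrasts the two storage layouts; it is not a model of HANA itself.

```python
# Row store: each record is kept whole, as an OLTP system would.
rows = [{"id": i, "region": "EU", "amount": float(i)} for i in range(1000)]

# An analytic aggregate over a row store must visit every record...
total_row_store = sum(r["amount"] for r in rows)

# ...whereas a column store keeps each attribute as one contiguous
# array, so the same aggregate scans a single column.
columns = {"amount": [r["amount"] for r in rows]}
total_column_store = sum(columns["amount"])

assert total_row_store == total_column_store == sum(range(1000))
```

On real hardware the columnar scan is far friendlier to CPU caches and compression, which, combined with keeping everything in RAM, is where the claimed performance gains come from.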
But doesn’t this sound like the typical rhetoric from computing vendors, who previously addressed technology issues by recommending more CPU, RAM, or disk space? SAP HANA is delivered as a software appliance focused on the underlying infrastructure for SAP BusinessObjects. This <a href="http://download.sap.com/download.epd?context=B576F8D167129B337CD171865DFF8973EBDC14E3C34A18AF1CF17ED596163658ABE46C2191175A1415B54F1837F5F0A13487B903339C6F98">white paper</a> suggests a lot of the scoping is centred on hardware and infrastructure design.
HANA makes bold claims that traditional BI/DW folks would hesitate to whisper. The one that stands out is the “Combination of OLAP and OLTP” in a single database. Ouch! Feel the wrath of the stakeholders of business operations. Another claim is running analytics in “mixed operations”. Double ouch!
It’s already challenging enough to get DW/BI solutions deployed without affecting operations. BI folks have constantly advocated separate infrastructure for analytics, with the ETL window as the firewall between systems. That same ETL window has also created delays for real-time analytics. Advocating a move of the BI/DW infrastructure back into operations gets you closer to real time, but making it work politically is going to be a challenge.
For other BI/DW vendors, this solution would be unfeasible, but because SAP also happens to be the largest ERP application vendor on the planet, it definitely has a good shot at consolidating its ERP with HANA’s BI analytics. Google, Facebook and the other large online behemoths already do it. So why not?!
This is indeed exciting, and it’s definitely time to take a closer look at SAP HANA.