In the coming weeks, we’ll have a blog post about the recent artificial intelligence day at the AWS Loft (NYC) and what Amazon is doing to democratise access to AI resources. We’ll also be announcing our involvement in a new deep learning Meetup group in Cork (Ireland). In the meantime, here is some food for thought if you’re considering an advanced analytics project (e.g. big data, AI, machine learning, deep learning). According to Gartner:
“…through 2017, 60 percent of big data projects will fail to go beyond piloting and experimentation, and will be abandoned.” (http://www.gartner.com/newsroom/id/3130017)
This article from Network World highlights very similar issues (https://www.networkworld.com/article/3170137/cloud-computing/why-big-data-projects-fail-and-how-to-make-2017-different.html).
Not great odds in anyone’s book! But why is this the case, and what can be done to boost your chances of success?
Certainly, a major factor influencing the outcomes of these projects is the level of expertise and skill applied to the chosen business problem. AI and big data projects are much more than deploying Hadoop and hoping for the best. Teams need the right mix of business skills (asking the right questions), IT skills (choosing the right infrastructure), and data science/quantitative skills (analysing the data in the right way).
Many organisations try to build these capabilities in-house, but that approach is slow and poorly suited to securing a quick win, which is especially important if you want to avoid Gartner’s dreaded “trough of disillusionment”:
(Gartner Hype Cycle - http://www.gartner.com/technology/research/methodologies/hype-cycle.jsp)
Building the skills and tools internally isn’t for everyone, and it’s often not the best way to get started.
Organisations that don’t have the skills, or don’t have enough people with the right skills, can tap outside firms to help ensure a successful outcome. Choosing the right development partner has the additional (and oft-overlooked) benefit of building in-house capabilities through knowledge transfer, exposure to industry best practices, and technical cross-pollination.
Choose the right deployment model
Big data and AI infrastructure projects can be major capital expenditures. Once deployed, businesses want to see a fast and healthy return on those dollars. That is not necessarily an environment conducive to experimentation, learning, failing fast, and staying agile and flexible: all traits that lend themselves to a DevOps-friendly practice.
An approach that is far nimbler and offers shorter time-to-value is to make use of cloud-based data warehousing, data-lake, and AI solutions. These offerings are far more cost-effective (if architected correctly), and offer far greater flexibility and scalability. You also transfer the burden of managing the infrastructure onto the cloud provider, leaving you to focus on solving your business problems and not having to worry about change windows, patching schedules, and maintenance agreements.
Plus, you’ll be making your financial bean-counters happy: instead of capitalising IT assets and depreciating them on your balance sheet, your cloud costs shift to operational expenses and are reflected on your income statement in the period they occur.