Since Amazon launched their cloud computing business AWS ("Amazon Web Services") in 2006, they have been striving to broaden and deepen their offerings. By broadening I mean allowing greater and greater access to the cloud, whether geographically or from an ever-increasing range of devices. By deepening I mean allowing for a greater complexity of services provided. This has been part of their strategy to become the premier cloud computing company. It's safe to say (for better or for worse) that Amazon has succeeded in their strategy.
This year's re:Invent conference is highlighting the depth of Amazon's offerings in the Machine Learning and Business Intelligence space. In April, they launched Amazon Machine Learning, and yesterday they launched Amazon QuickSight. The conference is also highlighting the area of Deep Learning, with several sessions devoted to the topic, though Amazon has yet to provide its own Deep Learning service. Amazon is clearly trying to position itself as the place to go when you want access to analytics but don't have sufficient resources to create your own data analytics pipeline.
Wednesday, I attended two sessions on the Amazon Machine Learning offering. In the first session, Amazon attempted to enable participants to create their own personalized restaurant recommendation website (like a cross between Yelp and Netflix) from scratch. Though ultimately the workshop was not sufficiently organized to succeed in its objective, we got really close to creating a fully functioning demo. The second session highlighted an application that could listen to a Twitter feed and automatically decide whether or not a particular tweet deserved further attention from customer service personnel. Amazon is streamlining the machine learning process by not only providing the hardware needed to run intensive computations, but also by providing algorithms, analytics tuning, and a service for running the discriminator and taking action on the fly. They even provide "Mechanical Turks" for intelligent sample labeling. The reason that Mechanical Turks are necessary is that Artificial Intelligence, by definition, attempts to simulate human intelligence, and this can't be done without putting real people in the process.
For example, to create an automatic Twitter feed router one would:
- Collect a large set of tweets that the learning algorithms will train on
- Using Amazon Mechanical Turk, have each tweet individually categorized by a real person
- Using Amazon Machine Learning and the categorized tweets, compute a model which will be used to process future tweets
- Using the Amazon Machine Learning console, fine-tune the model's predictions to find the best balance of false negatives and positives for the particular application (in this case, a false negative is much worse than a false positive because a false negative implies a customer issue that is never addressed)
- Using Amazon Lambda, create a predictor which will follow the live Twitter feed, evaluate each tweet to see if it needs to be routed to customer service, and if so, send the tweet by text message to an appropriate agent
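The tuning step above can be sketched in a few lines of Python. This is a minimal, self-contained illustration of choosing a score threshold to balance false negatives against false positives; the scores, labels, and helper functions here are hypothetical stand-ins, not part of Amazon's actual API:

```python
# Hypothetical sketch: a model scores each tweet, and we pick the cutoff
# above which a tweet is routed to customer service. Because a missed
# customer issue (false negative) is never addressed, we first demand as
# few false negatives as possible, then minimize false positives.

# (model score, True if a human labeled the tweet "needs attention")
labeled_tweets = [
    (0.95, True), (0.80, True), (0.62, True),
    (0.55, False), (0.40, False), (0.10, False),
]

def error_counts(threshold, data):
    """Count (false negatives, false positives) at a given cutoff."""
    fn = sum(1 for score, needs_attention in data
             if needs_attention and score < threshold)
    fp = sum(1 for score, needs_attention in data
             if not needs_attention and score >= threshold)
    return fn, fp

def pick_threshold(data, candidates):
    """Prefer the fewest false negatives, then the fewest false positives."""
    return min(candidates, key=lambda t: error_counts(t, data))

best = pick_threshold(labeled_tweets, [0.3, 0.5, 0.6, 0.7, 0.9])
```

In the real Amazon Machine Learning console this trade-off is adjusted with a score-threshold control over the full evaluation set, but the underlying principle is the same.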
Amazon is demonstrating the future of intelligent computing. It's exciting, it's scary, and it's highly applicable for our very own Data Dashboard.