Artificial intelligence, or AI, is all around us, and it's certainly more than just a buzzword. Tech companies have already improved their products thanks to advancements in the field of AI. But for Microsoft, simplifying AI is the ultimate key to success.
“AI is still not as easy as it should be, there are still complications,” Bharat Sandhu, Director of Artificial Intelligence at Microsoft, tells Gadgets 360 during a one-on-one conversation on the sidelines of the Microsoft Build conference in Seattle. “The number one goal for us is, thus, to make it really simple and make the developers and data scientists more productive.”
To make things easier for people who don't have coding skills, Microsoft showcased a number of new developments at Build, including an automated machine learning user interface and a “zero-code” experience as the two major updates to the Azure Machine Learning service.
Advancements in areas such as vision, speech, language, and decision-making capabilities were also announced at the annual developer-focussed conference. Microsoft also showed off developments such as adaptive dialogues and a language generation package for the Azure Bot Service.
In addition to the tools designed for non-developer audiences, Microsoft introduced Azure Machine Learning notebooks that are touted to offer a “code-first” machine learning experience for developers and data scientists. The company is also trying to attract customers who are just getting started with machine learning concepts but have little coding knowledge. For them, Microsoft is offering a “drag-and-drop” experience.
Sandhu highlights that the purpose behind various announcements that took place at this year’s Build is to drive the adoption of AI “much more rapidly” than ever before.
“If I look back two years ago, customers were asking me what can I do with AI,” Sandhu says. “Now, they’re saying I know what I need to do but can you help me do it faster.”
Aside from simplifying AI adoption, Microsoft wants to make sure that its AI solutions are being deployed at scale. The company offers MLOps and model interpretability capabilities to help business users design and deploy their AI models at scale.
“We never look at your data”
Developing an AI model generally requires a large amount of data. However, Sandhu underlines that Microsoft takes security and privacy of user data quite seriously. “We have a very simple approach in Azure — we never look at your data, it’s your data. We never use your data to improve our algorithms or models,” he says.
But if Microsoft is not using the data it receives from the solutions it offers, then how does it train its systems to perform certain operations? Sandhu answers this in a simple way. “What we do is we buy datasets,” the executive tells Gadgets 360.
“Getting more data is not always good because when you get more data, you get more biased data,” he adds.
Microsoft in 2017 formed the AI and Ethics in Engineering and Research (Aether) committee, which helps ensure that AI systems “are fair, reliable, and safe, private and secure, inclusive, transparent, and accountable” and works to reduce bias.
“Everything goes through the Aether committee, which looks at the models that have been built,” emphasises Sandhu. “Are they biased or are they unbiased? We put them through fairly rigorous testing.”
Having said that, Microsoft does provide customisations to its customers, allowing them to meet specific demands. At the Build conference, the company showcased its Conversation Transcription Service, a good example of a customised offering. The service is capable of transcribing multi-user conversations in real time after processing custom datasets.
“All these AI services can be customised with customers' data,” explains Sandhu. “But that customised model and customised data live in the customer subscription — in their account and not in our account. We give them the techniques and let them use them themselves, but the data lives in their walled garden.”
At this year’s Build, Microsoft demonstrated plenty of offerings that have been deployed by its partners. The keynote presentation by CEO Satya Nadella featured BMW, Schneider Electric, Starbucks, and Toyota Motor, among others, as the key partners already using Microsoft solutions to bring new AI experiences to their customers.
However, Sandhu points out that apart from being an enabler for corporate customers, Microsoft leverages its partnerships to improve its existing AI offerings and shape new ones.
“We work in a six-month planning cycle,” says Sandhu. “We always have this research work happening that goes to the Aether committee, and then we say we can launch these things. But then we have a lot of customers who are coming, from Starbucks to Schneider and British Petroleum and various governments also. That's what majorly helps us do our planning.”
Microsoft also uses a phase called private preview to work hand-in-hand with select customers to validate their use cases and determine whether a new solution is ready to go live as a public preview.
Sandhu reveals that Azure Machine Learning is one of the key products that has evolved through customer feedback, as have newly launched solutions such as MLOps and Form Recogniser.
Microsoft is also keen to underline the importance of openness when it comes to AI. The company has embraced various open-source platforms and started contributing to projects such as MLflow. Further, it announced a preview of Azure Open Datasets, which provides rich, curated open data.
Sandhu says that openness is vital to keep growing in the AI world. “What we've learnt from companies is that you can't go and say, ‘Please use my solution end-to-end.’ It never happens,” he affirms.