“Applied AI/ML in the Workplace – Geek Food for Thought” – UT Austin presentation

Last Thursday we presented “Applied AI/ML in the Workplace – Geek Food for Thought” at the University of Texas at Austin Computer Science department. Thomson Reuters is one of the Friends of the University of Texas at Austin, a program that gives students an excellent opportunity to engage with industry and learn more about companies that offer internships and job opportunities.

The speakers were me and Katherine Li, a data scientist on my team. Special thanks to our co-workers and UT Austin alumni Cameron Humphries, director of software engineering, and Matthew Hudson, software engineer at Thomson Reuters. Special thanks also to Jennifer Green, senior talent acquisition partner in Thomson Reuters HR, and Ana Lozano, events program coordinator at UT Austin, who helped set up the talk. Most importantly, thank you to the UT Austin students who attended the event; we know we missed others because of conflicting class schedules and midterm exams.

I first talked about Thomson Reuters: a global company with a 100-year history, top-notch technology, and strong career development programs. I ran a video of our CEO and president, Jim Smith, explaining what makes Thomson Reuters Thomson Reuters. I then highlighted the founding fathers of Thomson Reuters, beginning with Paul Reuter, who founded Reuters News in 1851, and Roy Herbert Thomson, who in the 1930s founded the company that later became known as Thomson Corporation. The two companies merged in 2008 to form Thomson Reuters. I hope I made my point to the young audience that the founding fathers of Thomson Reuters, Paul Reuter, who pioneered telegraphy and news reporting starting with pigeon posts, and Roy Herbert Thomson, First Baron Thomson of Fleet, were both entrepreneurs at about the same age as they are now.

I then provided an overview of Thomson Reuters Labs and listed some of our key innovative products, including the latest, Westlaw Edge, the most advanced legal research platform ever. I then moved on to talk about AI and ran a video of our TR Labs CTO, Mona Vernon, speaking to The Economist earlier this year about the AI and machine learning revolution. That was a great segue into the main topic of the presentation: applied AI in the workplace.

Through a couple of slides, I tried to make the point that students in the field of machine learning and artificial intelligence should consider applying existing algorithms to their projects or their next start-up idea instead of building everything from scratch. It is quite understandable that students want to understand, or even contribute to, the advancement of the core algorithms of artificial intelligence. That is great and very important, but, unfortunately, it does not always lead to the next innovation or the next best product. The markets are hungry for applications of artificial intelligence delivered in the quickest time possible and in all the different ways that could have a societal impact. To illustrate the point, Katherine Li and I showcased four projects that leverage machine learning and natural language processing algorithms. We got the applications working in a short time because we leveraged available cloud-based solutions, notably Amazon Web Services and Google Cloud, and added our own code on top. By spending less time building machine learning algorithms, we were able to focus more on the ideas and tie the different components into working prototypes.
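To make the "build on existing services" idea concrete, here is a minimal, hypothetical sketch of the pattern: the prototype only constructs the request body that a hosted sentiment service (here, the Google Cloud Natural Language REST endpoint) expects and maps the returned score to a label, while the heavy lifting stays in the cloud. The `label_sentiment` helper and its 0.25 thresholds are illustrative choices, not part of any Thomson Reuters product, and the actual HTTPS call and credentials are omitted.

```python
import json

# The Google Cloud Natural Language sentiment endpoint; a real call would
# POST the payload below over HTTPS with credentials attached (omitted here).
ANALYZE_URL = "https://language.googleapis.com/v1/documents:analyzeSentiment"

def build_sentiment_request(text: str) -> str:
    """Build the JSON body the analyzeSentiment REST endpoint expects."""
    payload = {
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": "UTF8",
    }
    return json.dumps(payload)

def label_sentiment(score: float) -> str:
    """Map the API's sentiment score (-1.0 to 1.0) to a coarse label
    for a prototype UI. The 0.25 cutoffs are an arbitrary choice."""
    if score >= 0.25:
        return "positive"
    if score <= -0.25:
        return "negative"
    return "neutral"

print(label_sentiment(0.8))   # positive
print(label_sentiment(-0.6))  # negative
```

With the scoring delegated to the service, the prototype code reduces to request building and presentation logic, which is exactly why these demos came together quickly.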

You can check out the presentation via this Dropbox link (note: please download the deck and run the slides in presentation mode so that you can access the videos).

Applied AI/ML in the Workplace – From Labs to Product, Real Use Cases at Thomson Reuters

Looking forward to presenting “Applied AI/ML in the Workplace – From Labs to Product, Real Use Cases at Thomson Reuters” at the University of Texas at Austin Computer Science department tomorrow. We will highlight some cool products that recently came out of Thomson Reuters Labs and showcase how easy, beneficial, and inexpensive it is nowadays for students to power their software with AI/ML algorithms. Check the event details.

“AI & I at Work” – University of Texas at Dallas MIS Club Presentation

Update: Link to the presentation via Dropbox. Note: you need to download the presentation and run it in presentation mode to access the videos. Also, the GitHub link for Sentiment-Tweets and the GitHub link for AutoML Course Reviews.

I love my University of Texas at Dallas MIS Club audience! I started by running videos of key events in the history of data science and AI since the 60s. I never imagined that I would show Charlie Chaplin in the 80s IBM PC commercial or talk about the first desktop computer of the 60s, the Olivetti Programma 101, but I did :). I then talked about how the Python programming language is a common instrument for both types of data scientists: the analytical and the AI product builders. I promoted JupyterLab over Jupyter notebooks and encouraged the audience to leverage cloud notebooks using Google Colab.

Then we moved on to the cool stuff that I had prepared for the event. I showcased my quick and easy implementation of a Twitter sentiment analysis product built with Python, Tweepy APIs, Django, Google Cloud NLP, and Docker containers. I then walked the audience through how I built a production-ready customer-review rating model in less than 4 hours using Google AutoML for natural language and a Kaggle dataset that I found through Google's new Dataset Search engine. After that, we had fun recognizing objects in the auditorium with an Amazon DeepLens camera after I deployed a pretrained neural network model for object detection. Time goes so fast when it is all love for computers. #loveoflearning
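For readers curious about the preprocessing step of such a Twitter pipeline, here is a small illustrative sketch (not the actual Sentiment-Tweets code, which is on GitHub): before a tweet fetched via Tweepy is sent to a sentiment API, it helps to strip Twitter-specific markup so the model scores the message itself. The `clean_tweet` helper and its regexes are my own assumptions for illustration.

```python
import re

def clean_tweet(text: str) -> str:
    """Strip URLs, @mentions, and the '#' of hashtags so a sentiment API
    scores the actual message rather than Twitter markup."""
    text = re.sub(r"https?://\S+", "", text)  # drop links
    text = re.sub(r"@\w+", "", text)          # drop @mentions
    text = text.replace("#", "")              # keep hashtag words, drop '#'
    return " ".join(text.split())             # collapse leftover whitespace

print(clean_tweet("Loving the new #AI demo https://t.co/abc123"))
# Loving the new AI demo
```

Keeping the hashtag words (while dropping only the `#`) preserves sentiment-bearing terms like "AI" or "love" that users often embed in tags.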

English on non-English NLP and machine learning projects

Whenever I ask bilingual (English plus another language) students or professionals working on machine learning or artificial intelligence whether they have considered doing an AI project in their non-English mother tongue, such as Hindi, Spanish, or Arabic, they look at me puzzled and surprised. Yes, there are a lot of publications on all sorts of languages, but how often do you see innovative products in the market for non-English customers, even in English-speaking nations? The US has a huge immigrant population and is home to neighborhoods where English is barely spoken. Why not use deep learning to develop more intelligent products that target non-English audiences, instead of just producing yet another piece of translation software every time? We need to think beyond the status quo of research, products, software, and publications that are predominantly English. It is challenging, I admit, because it is so easy to code in English, with English programming language syntax, editors, OSes, and GUIs, and it is also hard to find non-English corpora. Mandarin is an exception here. But it is not impossible to do more for non-English-speaking societies.
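As a tiny first step beyond English-only pipelines, the sketch below guesses the dominant writing system of a string from Unicode character names, which a prototype could use to route input to a language-specific model instead of assuming English. This is a heuristic of my own for illustration, not a substitute for a proper language identifier.

```python
import unicodedata

def dominant_script(text: str) -> str:
    """Guess the dominant writing system of a string by counting which
    script names appear in its letters' Unicode character names."""
    counts = {"ARABIC": 0, "DEVANAGARI": 0, "CJK": 0, "LATIN": 0}
    for ch in text:
        if not ch.isalpha():
            continue
        name = unicodedata.name(ch, "")
        for script in counts:
            if script in name:
                counts[script] += 1
                break
    return max(counts, key=counts.get) if any(counts.values()) else "UNKNOWN"

print(dominant_script("مرحبا بالعالم"))  # ARABIC
print(dominant_script("Hola mundo"))     # LATIN
```

Even a crude router like this makes the point: supporting non-English users starts with not silently feeding their text into an English-only model.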