The Dallas, Texas metroplex was home for my family and me for over 18 years. Our kids were born there, and a major part of our careers was formed there as well. Dallas is an excellent place to raise a family. The entrepreneurial scene in the area is picking up fast these days, which makes Dallas a great place to start a business as well. My family and I built great memories and friendships over time in Dallas, but it is time for us to depart this great city that loved us just as we loved it. Time for a new venture in a beautiful city. Goodnight, Texas, and goodbye 🙁 Hello, Vancouver, Washington; Camas; Portland; and Banfield Pet Hospital! A new beginning starting tomorrow! 🙂 – with Mayssaloun Tay
I am so excited to announce that next week I am joining Banfield Pet Hospital, the largest veterinary hospital in the world, at its headquarters in Vancouver, Washington as Senior Director of IT Growth & Enablement. Banfield is part of the Mars family of businesses, with thousands of hospitals and over 18,000 associates working to provide top healthcare services to our pets. In a market that doubled in just a decade, with 70% of US households owning one or more pets, three fourths of millennials owning dogs, and 50% owning cats, I am so excited to now be part of a team that wants to save lives. Not only that, but this is a field that is ripe for advanced technologies and innovation using AI, virtual reality, augmented reality, and more.
It is my 3rd year volunteering at Lamar Middle School (Lewisville ISD), a school near our house in Texas. I kept accepting the volunteering task of presenting at its career fair even after my son moved on to high school because I felt a sense of purpose in encouraging new generations toward the field of computer science. I estimate that I have presented to around 150 8th graders at the school across all three visits combined. I would first talk about my past and what I do at Thomson Reuters, and then get into my passion – technologies. Below are some slides that I presented between 2017 and 2018.
This year I learned to summarize my background in one slide.
After that I went into technologies. In 2017 I brought to school various gadgets, including a Raspberry Pi, a maker robot, and an Arduino, to showcase what one can do with technology. In 2018, I decided to focus on AI and began showcasing through videos what artificial intelligence is about.
I showed YouTube videos of an MIT cheetah robot, an Audi self-driving car, and a chatbot interaction between Alexa and Siri.
After that, though, I felt that I was lecturing the students by describing how to become a great computer scientist.
But this year, I decided to make things more interactive by showcasing actual implementations and aligning the topic with something their age group loves: playing games! Hence, I focused on how gaming and artificial intelligence come together, and how cool it can be to be the next programmer of such algorithms rather than simply playing Fortnite, League of Legends, Dota 2, and more.
I focused on the topic of leveraging cameras to let computers learn from players or people. I showed Chintan's (DeepGamingAI) video “creating custom Fortnite dances with webcam and Deep Learning”
and went over Farza's video “DeepLeague: leveraging computer vision and deep learning on the League of Legends mini map + giving away a dataset of over 100,000 labeled images to further esports analytics research,” and explained how machines can learn through cameras.
After that, I brought out the Amazon AWS DeepLens that I had previously set up for the event by deploying the object-recognition model that recognizes 20 objects: airplane, bicycle, bird, boat, bottle, bus, car, cat, chair, cow, dining table, dog, horse, motorbike, person, potted plant, sheep, sofa, train, and TV monitor. I pointed the camera at my audience and explained the process of neural networks, object detection, and object classification, and what reinforcement learning could do if we expanded the project.
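For the curious, the shape of that demo is roughly: the model emits raw detections, and a small post-processing step maps each class index to one of those 20 labels and drops low-confidence hits. A minimal sketch in plain Python (the detection tuple layout and function name here are my illustrative assumptions, not the exact DeepLens SDK):

```python
# Sketch of the post-processing step behind the DeepLens demo:
# map raw (class_id, confidence, box) detections to the 20
# PASCAL VOC class names and keep only confident hits.

VOC_CLASSES = [
    "airplane", "bicycle", "bird", "boat", "bottle", "bus", "car",
    "cat", "chair", "cow", "dining table", "dog", "horse", "motorbike",
    "person", "potted plant", "sheep", "sofa", "train", "tv monitor",
]

def parse_detections(raw, threshold=0.5):
    """Filter raw SSD-style detections and attach readable labels.

    Each raw detection is assumed to be a tuple:
    (class_id, confidence, xmin, ymin, xmax, ymax).
    """
    results = []
    for class_id, conf, xmin, ymin, xmax, ymax in raw:
        if conf >= threshold:
            results.append({
                "label": VOC_CLASSES[int(class_id)],
                "confidence": conf,
                "box": (xmin, ymin, xmax, ymax),
            })
    return results

# Example: one confident "person" plus one low-confidence detection
detections = parse_detections([
    (14, 0.92, 0.1, 0.1, 0.4, 0.9),  # person, kept
    (7, 0.30, 0.5, 0.5, 0.7, 0.7),   # cat, dropped (below threshold)
])
print(detections[0]["label"])  # → person
```

Pointing the camera at the audience, the "person" class is naturally the one that fires the most, which made it a fun way to explain confidence thresholds.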
After that, I brought out my Google AIY Vision kit and explained how it works. To my bad luck, I could not get the device to do more than beep when it sees a face. Its code should light up in different colors when I make a happy or frowning face. It had worked all along in the past, but this time I realized that I had brought the wrong lens with me. Nevertheless, I explained how students can build such kits by showing them what's inside.
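For anyone wondering what the kit's demo does under the hood, the idea boils down to mapping a face "joy score" from the on-device model to an LED color. A toy sketch of just that mapping (the thresholds, colors, and function name are my own assumptions; the real kit drives this through the AIY Vision APIs):

```python
# Toy sketch of the face-reaction logic on the AIY Vision kit:
# the on-device model scores how "joyful" a face looks in [0, 1],
# and that score picks the LED color.

def joy_to_color(joy_score, smile_threshold=0.7, frown_threshold=0.3):
    """Map a face-joy score in [0, 1] to an (R, G, B) LED color."""
    if joy_score >= smile_threshold:
        return (0, 255, 0)   # happy face: green
    if joy_score <= frown_threshold:
        return (255, 0, 0)   # frowning face: red
    return (0, 0, 255)       # neutral face: blue

print(joy_to_color(0.9))  # → (0, 255, 0)
```

With the wrong lens the camera never produced usable face crops, so the scoring step never ran, which is why the device only beeped on face detection.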
Overall, 45 students came to my all-day demo this year, in groups of 5 to 7 every 20 or 25 minutes. I answered questions about my career and what I do at Thomson Reuters. Some asked me about my day-to-day work, what my challenges are, and what keeps me interested at work. My answer was honest and simple: I love taking on challenging projects and focusing on solving them with the team, just like we would play the next adventure game and solve its puzzles.
My key takeaway from the whole experience is that most of the students love games, and I used that to make my point that they can leverage computers in whatever they love to do. Some said they want to go into law enforcement, law, media, or medicine, and not necessarily games or computer science. Even then, I had an answer: computer programming and artificial intelligence are useful in whatever career they wish to pursue. Hopefully the students left my sessions with more interest in computers and programming. That's the goal. I hope it works for them just like computers have worked for me my entire life, and continue to do so through innovation and problem solving.
I am showcasing AI gadgets today to 8th-grade middle schoolers at Lamar Middle School (Lewisville ISD), next to my house. The students have a career fair event, and I hope to encourage future programmers into the field of technology. I am taking with me an Amazon DeepLens and a build-it-yourself Google AIY Vision kit. I plan to show them real-time object classification with the DeepLens and a pretrained MXNet neural network, followed by an audio response to facial expressions (smiling or frowning) with the Google AIY kit, a Raspberry Pi Zero, and a PiCamera. Hopefully it will do the trick and get students as excited about AI as us adults.
It was a pleasure and an honor to represent Thomson Reuters at the panel on “scaling without stagnating” as part of corporate innovation. The event was hosted by Capital One and sponsored by Dallas Innovates as part of #dallasstartupweek. The panelists, Dalia Powers (CBRE VP/CIO and panel moderator), Sterling Mah Ingui (Head of Go To Markets, Fidelity Labs), Scott Emmons (The Current Global CTO), Sean Minter (AmplifAI CEO), Charlie Lass (MIT investor), and myself, Tarek Hoteit (Thomson Reuters Labs), took turns discussing leadership, people, organization, process/change management, and technology to support innovation in the corporate world. For me it was also an opportunity to let startups in Dallas know about Thomson Reuters, Thomson Reuters Labs (http://labs.tr.com), and our community engagements in Dallas. I even shared my personal journey through a major transformation of a product as part of corporate innovation, hoping to encourage everyone not to give up and to do the same and more. We also answered questions from the audience, such as how startups can interact with corporates (my answer: persistence is key, but if someone from corporate is ignoring you, find another contact. Don't give up).
I presented a 15-minute lightning talk on leveraging AutoML for sentiment analysis.
Quick and easy AutoML for Sentiment Analysis and Classification tasks
“Machine learning algorithms have evolved significantly in the last few years. AutoML is one of the latest advancements in the field that allows anyone to build and deploy AI products without requiring extensive knowledge in the field. The lightning talk will showcase how one can build a production-quality sentiment analysis model using Google AutoML and Google Cloud with the least coding possible.”
I first showed how to upload datasets directly into the Google AutoML NLP portal and, from there, train a model and perform predictions. After that, I showed how I integrated the sentiment analysis model into analyzing a Twitter stream using Django, Docker, the Twitter API/Tweepy, Jupyter Notebooks, and PostgreSQL. I published the code on GitHub under hoteit/courses-sentiment-tweets.
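The glue between the Twitter stream and the model is simpler than it sounds: each incoming tweet is sent to the deployed model for a score, the score is turned into a label, and the result is stored. A minimal sketch of that flow, with the AutoML prediction call stubbed out (the function names and the keyword heuristic in the stub are purely illustrative; the real project calls the Google AutoML NLP API and persists results via Django and PostgreSQL):

```python
# Sketch of the tweet -> sentiment-label pipeline. The AutoML call is
# replaced by a stub so the flow is easy to follow end to end.

def predict_sentiment(text):
    """Stub for the deployed AutoML NLP model; returns a score in [0, 1].

    Purely illustrative: counts a few positive keywords. The real model
    is trained in the Google AutoML NLP portal and called over its API.
    """
    positive_words = {"love", "great", "excellent"}
    hits = sum(word in text.lower() for word in positive_words)
    return min(1.0, 0.5 + 0.25 * hits)

def label_tweet(text, threshold=0.6):
    """Score a tweet and attach a coarse label, as the web app would store it."""
    score = predict_sentiment(text)
    label = "positive" if score >= threshold else "negative_or_neutral"
    return {"text": text, "score": score, "label": label}

result = label_tweet("I love this course, it is great!")
print(result["label"])  # → positive
```

In the actual project a Tweepy stream listener feeds `label_tweet`-style logic, and Django models persist each scored tweet to PostgreSQL.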
Last Thursday we presented “Applied AI/ML in the Workplace – Geek Food for Thought” at the University of Texas at Austin Computer Science department. Thomson Reuters is one of the Friends of the University of Texas at Austin, which gives students an excellent opportunity to engage with the industry and learn more about companies that offer internships or job opportunities.
The speakers were Katherine Li, a data scientist on my team, and me. Special thanks to our co-workers and UT Austin alumni Cameron Humphries, director of software engineering, and Matthew Hudson, software engineer at Thomson Reuters. Also a special thank you to Jennifer Green, senior talent acquisition partner in Thomson Reuters HR, and Ana Lozano, events program coordinator at UT Austin, who helped set up the talk. More importantly, thank you, UT Austin students, for attending the event, knowing that we missed more students because of conflicting class schedules and midterm exams.
I first talked about Thomson Reuters: the company's 100-year history, its global presence, and its top-notch technology and career development programs. I ran a video of our CEO and president, Jim Smith, explaining what makes Thomson Reuters Thomson Reuters. I then highlighted the founding fathers of Thomson Reuters, beginning with Paul Reuter, who founded Reuters News in 1851, and Roy Herbert Thomson, who founded in the 1930s the company that later became known as Thomson Corporation. The two companies merged in 2008 to become Thomson Reuters. I hope I made my point to the young audience that the founding fathers, Paul Reuter, who pioneered telegraphy and news reporting starting with pigeon posts, and Roy Herbert Thomson, First Baron Thomson of Fleet, were both entrepreneurs at around the same age as them.
I then provided an overview of Thomson Reuters Labs and listed some of our key innovative products, including the latest, Westlaw Edge, the most advanced legal research platform ever. I then moved on to AI and ran a video of our TR Labs CTO, Mona Vernon, speaking to The Economist earlier this year about the AI and machine learning revolution. That was a great segue into the main topic of the presentation: applied AI in the workplace.
Through a couple of slides, I tried to make the point that students in the field of machine learning and artificial intelligence should consider applying existing algorithms in their projects or their next start-up idea instead of building everything from scratch. It is quite understandable that students need to understand, or even seek to contribute to, the advancement of the core algorithms of artificial intelligence. That is great and very important, but, unfortunately, it does not always lead to the next innovation or the next best product out there. The markets are hungry for applying artificial intelligence in the quickest time possible and in all the different ways that would have a societal impact. To illustrate the point, Katherine Li and I showcased four projects that leverage machine learning and natural language processing algorithms. We managed to get the applications working in a short time because we leveraged available cloud-based solutions, notably Amazon AWS and Google Cloud, and added our own code for the projects. By spending less time building machine learning algorithms, we were able to focus more on the ideas and tie the different components into working prototypes.
You can check the presentation on this Dropbox link (note: please download the deck and run the slides in presentation mode so that you can access the videos).
Looking forward to presenting “Applied AI/ML in the Workplace – From Labs to Product, Real Use Cases at Thomson Reuters” at The University of Texas at Austin Computer Science department tomorrow. We will highlight some cool products that recently came out of Thomson Reuters Labs and showcase how easy, beneficial, and cheap it is nowadays for students to power their software with AI/ML algorithms. Check the event details.
Update: link to the presentation via Dropbox. Note: you need to download the presentation and run it in presentation mode to access the videos. Also, see the GitHub link for Sentiment-Tweets and the GitHub link for AutoML Course Reviews.
I love my University of Texas at Dallas MIS Club audience! I started by running videos of key events in the history of data science and AI since the 60s. I never imagined that I would show Charlie Chaplin in the 80s IBM PC commercial or talk about the first desktop computer of the 60s, the Olivetti Programma 101, but I did :). I then talked about how the Python programming language is a common instrument for both types of data scientists: the analytical kind and the AI product-building kind. I promoted JupyterLab over Jupyter notebooks and encouraged the audience to leverage cloud notebooks using Google Colab. Then we went on to the cool stuff that I prepared for the event. I showcased my quick and easy implementation of a Twitter sentiment analysis product built with Python, the Tweepy APIs, Django, Google Cloud NLP, and Docker containers. I then walked the audience through how I built a production-ready customer review rating model in less than 4 hours using Google AutoML for language processing and a Kaggle dataset that I found through Google's new dataset search engine. After that, we had fun recognizing objects in the auditorium with the Amazon DeepLens camera after I deployed a pretrained neural network model for object detection. Time goes by so fast when it is all love for computers. #loveoflearning
A YouTube recording of a presentation I gave earlier this year (check the previous post). In the session, I discussed my passion for the field of computers and showcased various machine learning techniques and applications using Google AIY with a Raspberry Pi, Google Colab, and even a neural network on a Commodore 64.