Industry Stories Q&A: Alumnus Trevor Sullivan on Human Language Technology and AI

Jan. 9, 2024

Trevor Sullivan earned his M.S. in Human Language Technology from the Department of Linguistics at the University of Arizona in 2017. After finishing his master’s degree, he went to work for Ayfie, the software company where he had completed his internship. Trevor gained experience in various engineering roles before taking a position in October 2023 with a large financial services company as a senior conversational AI engineer. Recently he reflected on the pathway to his current role, what upcoming graduates should consider in charting their professional path, and the importance of responsible AI creation and stewardship.

 

Why did you choose UArizona?

The University of Arizona sort of chose me. I started my undergraduate degree there in linguistics and music, but on a lark, I took a Linux system administration class and fell deeply in love with computers and programming. Then I learned that the HLT program existed, and I oriented the rest of my undergraduate coursework toward being able to do that.

When you decided on your program, what professional future did you envision?

I wasn’t really angling toward anything specific. I knew I was interested in syntax and semantics and their applications, like text processing. Eventually, my curiosities pushed me toward text generation, comprehension, and information retrieval. It was those interests that guided me toward finding a job afterward. When I came out of the program, I knew search was my thing, so I found somewhere I could do that. That was largely enabled by an internship with Ayfie, where I also ended up getting my first job.

What is a conversational AI engineer and how does that role work within a financial services company? Can you share a real-world project example?

Currently, I’m building and supporting design tools for conversational interfaces, which, right now, means virtual assistants that help customer-facing employees look things up. On a typical day, I’ll be interacting with people in the organization who don’t have linguistic or technical expertise but have some kind of problem that they think a conversational AI tool can solve. I do a lot of explaining what the technology is and how it works. I build demos and make out-of-the-box models and templates that can be used to quickly stand up these kinds of systems in lots of different settings.

My current project combines vector-based knowledge, search engines, and generative AI technologies, including an LLM (a large language model, like ChatGPT), to find information for users and explain it in an approachable way. It’s kind of like your artificial friend who’s really good at Googling. The system architecture I’m building it with is a specific class of retrieval-augmented generation tool that is optimized for our needs. It’s an artificial, conversational assistant that has some knowledge, can look things up, and can rephrase and logically apply information. Hopefully, it saves time for call center workers and customer service people, letting them focus on more interesting things rather than searching for information.
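For readers who want a concrete picture of what a retrieval-augmented generation pipeline does, here is a minimal, self-contained sketch. It is purely illustrative and not the system Sullivan describes: the documents are made up, the toy bag-of-words “embedding” stands in for a real embedding model and vector store, and the final prompt would normally be sent to a large language model rather than printed.

```python
# Minimal retrieval-augmented generation (RAG) sketch. Illustrative only:
# the documents below are invented examples, the bag-of-words "embedding"
# stands in for a real embedding model, and the assembled prompt would
# normally be sent to an LLM API instead of printed.
from collections import Counter
from math import sqrt

DOCUMENTS = [
    "Wire transfers initiated after 5 p.m. are processed the next business day.",
    "Customers can dispute a card charge within 60 days of the statement date.",
    "Savings accounts are limited to six outbound transfers per month.",
]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term count (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble the 'augmented' prompt: retrieved context plus the user question."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# In a real system this prompt would go to a large language model;
# here we just print it to show the retrieve-then-generate hand-off.
print(build_prompt("How long do I have to dispute a charge?"))
```

A production version would swap in a real embedding model, a vector database, and an LLM call, but the retrieve-then-generate shape is the same.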

Aside from a doctorate or teaching role, what’s the academic path for someone with a graduate degree in HLT?

My understanding is that there are research groups in government and other private organizations that might be attractive to people who have a Ph.D. and want to stay in a research-type position. That work includes finding new ways to apply the technologies and push things forward, maybe with an entity like OpenAI, which is primarily a research firm.

What was your experience like going into industry work?

Getting my first position out of university was pretty smooth, especially because I had connections through my internship, which is where I decided to stay. I did have two other offers, though. One was with a Canadian news aggregator company. That was in 2017, coming right out of the fake news crisis of the years before, when stories made up by Macedonian teenagers were on the front page of every social media platform. So I thought, you know what, this might not be the industry I want to get into right now.

How did UArizona’s Human Language Technology program prepare you for your current role?

I’ve actually got some of my old school notebooks on my desk right now. My program gave me so many fundamentals of statistics and linguistic program design, which are essential to understanding the behind-the-scenes operation of the tools I’m using. Even if you’re not developing them yourself, it’s important to know how they work. Just recently, I was explaining to my team what the score means on a search API; I was pulling up the BM25 algorithm and talking about what we should and shouldn’t take it to mean. In short, the data management and processing skills I learned are essential and universally applicable, as is the ability to understand algorithmic time and space complexity and to get data in and out of systems, which you need for any kind of tech work.
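For context on the scoring he mentions: BM25 rewards a document for containing a query term often, but discounts very long documents and very common terms. A minimal, illustrative per-term version (the standard Okapi-style formulation with typical default parameters, not necessarily the exact variant behind any particular search API) might look like this:

```python
# Illustrative per-term BM25 score, Okapi-style formulation.
# k1 and b are conventional default parameters, not a claim about
# how any specific search API implements its scoring.
from math import log

def bm25_term_score(tf: float, doc_len: float, avg_doc_len: float,
                    n_docs: int, docs_with_term: int,
                    k1: float = 1.2, b: float = 0.75) -> float:
    """Score one query term against one document.

    tf: how often the term appears in the document
    doc_len / avg_doc_len: length normalization, so long documents
        don't win just by repeating words
    n_docs / docs_with_term: inverse document frequency, so rare terms
        count for more than common ones
    """
    idf = log(1 + (n_docs - docs_with_term + 0.5) / (docs_with_term + 0.5))
    norm = k1 * (1 - b + b * doc_len / avg_doc_len)
    return idf * tf * (k1 + 1) / (tf + norm)

# A full document score is the sum of this value over all query terms.
print(bm25_term_score(tf=3, doc_len=120, avg_doc_len=100,
                      n_docs=10_000, docs_with_term=50))
```

The practical takeaway is that a BM25 score is only comparable within one query against one collection; it is not a probability or a percentage, which is exactly the kind of interpretation question he describes fielding.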

What advice do you have for current HLT students who want to know more about industry work and what they have to look forward to?

Here’s my hot take and what I tell people all the time: Intro to AI should be a general education class. INFO 550 Artificial Intelligence with Clay Morrison changed my life more than any other class I’ve ever taken because the study of AI is the study of effective decision-making. The lessons you gain from learning how to produce algorithms that make good decisions apply to almost every aspect of life. Also, take the data ethics class. The field is at risk of doing harm, especially accidentally, and we’ve seen that harm done with advanced machine-learning algorithms. Data ethics teaches you how to detect bias in algorithms, how to think through consequences, and how to be responsible for the harm your creations can do.

Coming right out of the program, you’ve got some very useful, very powerful tools that can be used to do some really cool things. Take some initial risks and maybe try something like participating in somebody else’s startup, even though it might crash and burn. It probably will. But you learn so much in that process and become a way more valuable collaborator for larger things later. Our field is really in the spotlight right now. In the past, this field had mostly well-intentioned people who moved fast and broke things. Now, in addition to those people, we’re seeing an increasing number of opportunistic grifters who have put the space at risk in terms of its public reputation and its ability to continue into the long term. But I think those who are graduating now are very well positioned to be good representatives for the technology and to push people further in good directions.

Students finishing the program are ambassadors for language technology, which everyone is starting to care about. They can help their friends, family, and coworkers who want to use these tools understand better what the tools can’t do and the risks involved. A lot of people are putting sensitive information into prompts, and that’s not great. HLT program graduates understand why, and that makes them extremely valuable as business consultants. It’s very easy for people to fall prey to the normative biases and hallucinations that models often develop, and I am worried about that risk. Ultimately, people coming out of the HLT program have the opportunity to be highly valuable citizens of the world and to help society navigate the rapid changes that are happening now and still to come.

 
