Case Western Reserve University’s Women in Tech Initiative is proud to shine the spotlight on distinguished alumna Carmen Fontana (CWR ’00; GRS ’05, engineering), leader of Centric Consulting’s Modern Software Delivery practice.
Carmen joined Centric Consulting in 2012, where she is currently responsible for strategy, business development, and client relationships within their Modern Software Delivery service offering. She specializes in Cloud and Emerging Tech, and is also involved in Centric’s Innovation program where she helps identify and cultivate new ways of solving clients’ problems.
She represents IEEE, a technical professional organization, by sharing emerging tech insights on an ongoing basis in industry articles and publications. She often shares information about IoT, Artificial Intelligence, and Quantum Computing, along with her thoughts on new tech.
Over the years, Carmen has developed a broad base of experience, with clients ranging from start-ups to large corporations, and from technology to talent management.
We were honored to spend some time over the phone with Carmen to learn more about her inspirational approach to personal and professional balance.
Can you explain what the key principles and distinctions are in terms of artificial intelligence, deep learning, and machine learning?
Answer: There’s a lot of ambiguity and confusion about these terms. I’m personally not someone who gets hung up on proper definitions because I feel like that can create false barriers that keep people from entering a space. However, I do tend to use some very general words to explain these different spaces.
Artificial intelligence, or AI, is kind of a catch-all term, an umbrella term that refers to machines learning to process information and act on it independently of humans. Humans still have to program them in the beginning, but over time, these systems are able to learn or act on their own. So, AI can be machine learning, but it can also be something more specific, such as autonomous vehicles or self-driving cars. It can also be virtual reality. There are just a lot of things you can categorize under AI.
Machine learning is a subset of artificial intelligence. It can be a stand-alone thing itself, or it can be something that fuels self-driving cars or virtual reality, etc. It’s essentially a mathematical way of teaching computers to learn. Machine learning is heavy on linear algebra, for anyone who’s ever taken that, and it’s just a way to enable knowledge transfer to computers.
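For readers who want to see that linear-algebra idea in concrete form, here is a minimal sketch (our illustration, not code from Carmen's projects) in which a computer "learns" the relationship hidden in noisy data using nothing more than a matrix solve:

```python
# A minimal sketch of the linear-algebra view of machine learning:
# fit a straight line to noisy data with a least-squares matrix solve.
import numpy as np

# Toy data: y is roughly 2*x + 1, plus noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.shape)

# Design matrix with a column of ones for the intercept
X = np.column_stack([x, np.ones_like(x)])

# Solve the least-squares problem X @ w ~= y for the weights w
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"learned slope ~ {w[0]:.2f}, intercept ~ {w[1]:.2f}")
```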
Deep learning is another subset, kind of like nested Russian dolls: deep learning is a subset of machine learning that uses neural networks, which are more sophisticated algorithms. You’ll see deep learning used particularly for visual recognition, being able to understand what images are, as well as voice recognition or speech-to-text recognition. Think Alexa. It’s machine learning with more sophisticated algorithms.
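To make the contrast concrete, here is a small illustrative sketch (ours, not from the interview) of a neural network doing the kind of visual recognition Carmen describes, classifying small images of handwritten digits:

```python
# Illustrative only: a small neural network (multi-layer perceptron)
# learning to recognize 8x8 handwritten digit images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# One hidden layer of 64 neurons; stacking more layers is what
# puts the "deep" in deep learning.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("image-recognition accuracy:", round(model.score(X_test, y_test), 3))
```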
At the end of the day, it seems as if everything is going to be powered by AI in some way.
What are some of the important principles we need to note in regard to how deep learning is becoming more involved with machine learning?
Answer: When I was at Case Western Reserve University back in the ’90s, my senior project used machine learning to model climate change in a certain region of the world. At that time, our tooling was very minimal. We had to write the lines of code ourselves, and our computers were limited to whatever we could find in the computer labs. The power wasn’t there to really run very deep algorithms, and we couldn’t process that much data because, again, we were limited by the hardware we had, even though it was top of the line at the time. My phone is probably more powerful than those computers at this point.
So, it’s been a really interesting journey to see how artificial intelligence, particularly machine learning, has changed over the 20-plus years since I was an undergrad, and particularly in the last five or so years, as it’s really accelerated now that we have cloud computing. Now we can process far more data to assist in the learning process. More data equals better learning, and we’re able to use more computationally heavy algorithms. Neural networks require a lot of calculations that we just couldn’t do previously.
The other big advancement in technology I’ve seen over the last 20 years is that machine learning has become a bit more democratized. Before, you really had to understand data science, and you practically had to be a PhD, or in my case, have a PhD advising you on your project. Now we have things like low-code and no-code machine learning, where people with just a foundational layer of knowledge can build machine learning models, spin them up, and use them right away. That’s important because if you want machine learning (and the intelligence that comes with it) to be available for all kinds of problems and to all kinds of businesses, you have to make it accessible to all.
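As a rough illustration of that accessibility (our sketch, not a specific low-code tool Carmen names), a high-level library lets someone with foundational knowledge spin up a working model in a handful of lines:

```python
# Illustrative sketch: building and using a classifier in a few lines
# with a high-level library, no custom algorithm code required.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# Scale the inputs and fit a classifier in one pipeline
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=200))
model.fit(X, y)

# The model is immediately usable for predictions on new measurements
print(model.predict([[5.1, 3.5, 1.4, 0.2]]))
```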
Do you see any societal issues pertaining to the advancement of artificial intelligence?
Answer: One of my soapbox topics has to do with the long-term ramifications of artificial intelligence. Look at what’s happened with social media over the last couple of years. When we started with social media, it was super fun and it was primarily intended to connect the world. Over time, it has really evolved, and we’ve seen that there’s a dark side, too—not only from a mental health standpoint, but it can be corrupted by nefarious global entities.
I worry with artificial intelligence that we’re going down a similar path. It’s taken us a while to learn those hard lessons with social media. We’re still in the honeymoon phase with AI—isn’t it great, we can do all these cool things? But we’re not always thinking about the downside, in terms of things like surveillance and mental health. AI can potentially be very biased, so circling back to something I said earlier, that’s why it’s really important to understand ethics and law and humanity, in addition to technology, so that technology is being implemented in a way that’s thoughtful and mindful of the world around us.
Are there any principles or general concepts related to AI and ethics and humanities that students should be especially aware of?
Answer: We are technologists, so we’re inclined to always want to lead with the technology and figure out the other pieces after that. But one lesson I’ve learned over the course of my career is that you really need to put people first. Then process. Then technology. So first understand the problem you’re solving and how it affects the people around you. Then understand how the processes have to change. And then finally figure out the technology. When we lead with technology, that’s when we have big problems down the line.
Check out our video interviews with Carmen Fontana:
- Role in tech, discussion on innovation, and suggested learnings
- Outrageous Goal Setting
- AI, Machine Learning and Deep Learning
This program would not be possible without the generous support of its sponsors, as well as supporters of the Women in Tech Initiative. Many thanks to:
- Craig Newmark Philanthropies
- Individual Donors: Ben Gomes (CWR ’90) and Deborah Weisser