AI Explainability & Data Scientist Without a Degree?!
Interview with Yotam Azriel, CTO at TensorLeap - What's AI episode 13
Good morning, fellow AI enthusiasts! This week's issue focuses on explainable AI, with a new podcast interview alongside the fantastic Yotam Azriel from TensorLeap!
1️⃣ CodiumAI | Generating meaningful tests for busy devs
Get non-trivial tests (and trivial, too!) suggested right inside your IDE, so you can code smart, create more value, and stay confident when you push.
2️⃣ What is Explainable AI? With Yotam Azriel, CTO at TensorLeap - What's AI episode 13
This week on the What's AI podcast, brace yourself for an awe-inspiring conversation with Yotam Azriel, co-founder and CTO of TensorLeap. He'll take you on a journey through his remarkable experiences, mind-boggling insights, and a vision that will leave you amazed. Discover how the power of passion, curiosity, and focus can accomplish wonders! And here's a little secret... Yotam's success story doesn't come with a degree. He's the ultimate data scientist who defied the university odds!
Before you commit to a knowledge-packed hour-long discussion, let's give you a sneak peek of what's in store this week.
Yotam Azriel, the non-traditional academic explorer, embarked on a scientific adventure from a young age. He delved into captivating realms like magnetic fields in physics, wireless charging technology, and the AI universe itself. These mind-expanding experiences sculpted his knowledge and prepared him for his entrepreneurial escapades.
Yotam's learning approach will blow your mind—hands-on experience all the way! He fearlessly immerses himself in unfamiliar domains with crystal-clear expectations, goals, and deadlines. This excellent method keeps him focused on his objectives, and it's an inspiration for all you self-learners out there (if you're reading this, you're definitely one of them!).
Prepare for a deep-sea dive into the realm of TensorLeap—an AI startup making waves with their applied explainability platform. They empower data scientists and developers working with AI models, tackling the enigma of complex AI behavior head-on. Brace yourself for a revolution in the world of explainable AI—a field that's as thrilling as discovering hidden treasure.
Want to unlock the mysteries of Explainable AI? The field has two ultimate goals: shedding light on decision-making for the people who use AI systems, and unraveling the mathematical techniques at work inside those neural networks. TensorLeap focuses precisely on the latter, offering you invaluable insights into the inner workings of AI systems.
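To make that second goal concrete, here is a minimal sketch of one classic technique from the explainability toolbox: gradient-based saliency. Note that this is a generic illustration, not TensorLeap's platform or API; the toy model and data are invented for the example.

```python
# Gradient-based saliency: measure how sensitive a network's decision is
# to each input feature. Toy model and data are invented for illustration.
import torch
import torch.nn as nn

# A tiny classifier: 4 input features -> 2 classes (hypothetical architecture).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

x = torch.randn(1, 4, requires_grad=True)  # one example input
logits = model(x)
predicted = logits.argmax(dim=1).item()

# Backpropagate the predicted class score down to the input: the magnitude
# of the gradient tells us which features the decision is most sensitive to.
logits[0, predicted].backward()
saliency = x.grad.abs().squeeze()

print(f"Predicted class: {predicted}")
print(f"Per-feature saliency: {saliency.tolist()}")
```

The same idea scales up: for an image classifier, the per-pixel gradient magnitudes form a heatmap showing where the model "looked" when making its prediction.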
Ready to take the plunge into the AI universe? Yotam Azriel has a nugget of wisdom for you—go for something applied and tangible! Set clear goals with real-world outcomes, and you'll have the motivation you need to conquer the exhilarating realm of artificial intelligence. Get ready to be AImazed!
Don’t miss this captivating podcast episode with Yotam Azriel as our special guest, interviewed by me (Louis Bouchard) for the What’s AI podcast. Tune in on Apple Podcasts, Spotify, or YouTube and expand your knowledge of Explainable AI!
3️⃣ AI Ethics with Auxane
Hey there, fellow AI enthusiasts! Today, we're diving deep into the topic of AI ethics and talking about why explainability is such a crucial aspect.
At its core, explainability means that we understand how and why an AI system makes its decisions. This is important for several reasons. Firstly, without understanding the reasoning behind a decision, we shouldn't blindly trust the system's output. This is particularly important in fields like healthcare or finance, where a wrong decision can have significant consequences. Secondly, without explainability, we can't identify and correct biases or errors in the system's decisions. This is vital not only to ensure fairness and inclusivity, but also the robustness of the system!
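To make the bias point concrete, here is a hedged sketch of how an explainability tool can surface a problem: permutation importance reveals how heavily a trained model leans on each feature, including a sensitive one it arguably shouldn't use. The dataset, feature names, and the deliberately biased labels are all invented for this illustration.

```python
# Permutation importance as a bias probe: if shuffling a sensitive feature
# hurts accuracy a lot, the model is relying on it. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
income = rng.normal(50, 15, n)
debt = rng.normal(20, 5, n)
group = rng.integers(0, 2, n)  # hypothetical sensitive attribute

# Labels are deliberately correlated with the sensitive attribute,
# simulating a biased historical dataset.
y = ((income - debt + 10 * group) > 35).astype(int)
X = np.column_stack([income, debt, group])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# A high importance score for 'group' is a red flag worth investigating.
scores = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt", "group"], scores.importances_mean):
    print(f"{name}: {score:.3f}")
```

If "group" comes out with a high score, the explanation has done its job: it has turned an invisible bias into something a developer can see and fix.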
However, there are some major challenges to achieving explainability in AI systems. When we talk about explainability, we can see two angles: technical transparency, usually targeted at engineers or other technical experts, and understandability, aimed at all types of users, from a tech company CEO to your baker.
As you can imagine, understandability is a big challenge in itself! You cannot explain things the same way to a highly digitally literate person and to your grandparents. Thus, one of the biggest challenges is variation in digital literacy. Not everyone has the technical background to understand the complex algorithms and processes behind AI. This means that even if an explanation is provided, it may not be easily comprehensible to everyone. This could lead to a lack of trust in the system or even scepticism towards AI as a whole. In turn, this scepticism will impact the adoption and acceptability of these technologies.
Another challenge is the diversity of cultures we have the chance to see in our world. Different cultures may have different expectations around explainability and different ways of understanding, just as we know counting conventions can differ from culture to culture. For example, some cultures may value transparency and a clear understanding of decision-making processes, conveyed through visualisation and plenty of detail, while others may prioritise accurate outcomes over explanations. This means that AI developers must be aware of cultural differences and adapt their systems to their target population.
In conclusion, explainability is a crucial aspect of AI ethics that enables us to trust and understand AI systems. However, achieving explainability is not without its challenges, including digital literacy and cultural differences. As we continue to develop and implement AI systems, we must strive to achieve an acceptable level of explainability while being aware of these challenges and working to overcome them.
I wish you all a great week! - Auxane Boch (iuvenal research consultant, TUM IEAI research associate).
We are extremely grateful that the newsletter is now read by over 12,000 incredible human beings across our email list and LinkedIn subscribers. Feel free to reach out to contact@louisbouchard.ai with any questions or details on sponsorships. You can also follow our newsletter at Towards AI, where we share the most exciting news, learning resources, articles, and memes from our Discord community weekly.
Thank you for reading, and we wish you a fantastic week! Be sure to have enough rest and sleep!
We will see you next week with another amazing paper!
Louis