AI Tooling for Your Dev Team: To Adopt or Not to Adopt?
As AI tooling becomes more popular, it's important to know the risks and benefits of adopting it. CodiumAI's Itamar Friedman joins Dev Interrupted to help out.
Amid the escalating buzz surrounding AI tools, many development teams grapple with deciding which ones suit their needs best, when to adopt them, and the potential risks of not doing so. As AI continues to pose more questions than answers, the fear of falling behind the competition lurks for many.
This week's episode of Dev Interrupted aims to dispel these uncertainties by welcoming CodiumAI’s founder and CEO, Itamar Friedman. In one of our most illuminating discussions this year, Itamar pierces through the AI hype, explaining what AI tools bring to the table, how to discern the ones that would truly augment your dev teams, and the strategies to efficiently identify and experiment with new tools.
Beyond the allure of AI, Itamar doesn't shy away from addressing its pitfalls and adversarial risks. He also probes into the future of the developer's role in an increasingly AI-driven landscape, answering the question: “Will there be developers in 10 years?”
“One risk is that the output of AI, at least in generative mode, naturally goes toward a common, well-worn solution, a kind of lowest common denominator, because it's trained on a lot of data and it goes for what's common in that data.”
Episode Highlights
- (2:40) Founding CodiumAI
- (8:25) Will there be developers in 10 years?
- (11:20) What kinds of AI tools are popping up?
- (15:00) Core capabilities of AI
- (19:30) Finding AI tools to solve pains you don't know you have
- (23:00) Enabling your team to use AI
- (26:45) Falling behind the competition
- (33:00) Pitfalls of AI
- (38:30) Adversarial risks of AI
- (43:45) Experimenting with new tools
- (47:40) Measuring the success of AI tools
- (50:15) Will AI replace or empower us?
Episode Excerpt
Yishai Beeri: What about adversarial risks in AI, ways in which AI can be manipulated or leveraged to cause me intentional harm?
Itamar Friedman: Okay. Let's give an example first, for those listeners who aren't familiar with what adversarial means.
My point is going to be that I think, in software development, it's relatively rare right now. You need to be aware of it, but try to think about whether it's possible for you. Let's say you're building autonomous driving software, with cameras and everything. Now the car is driving itself, say at level four autonomy. And the models were trained on all the data in the world that was available to the company, to the team training them. What happens if, for example, some pedestrian holds up a huge screen, and right now it's showing some sign or something like that?
You can actually influence the car: maybe the screen even shows a right-turn sign where there wasn't any right turn. What could happen? That's an adversarial case; most probably the team never saw this case in training. By the way, I'm not saying they're not aware of it. They're probably trying to train against it proactively, including adversarial models or adversarial data within their data set to harden the generative or analysis part of the AI.
So having said that, there [still] is an option to create an adversarial event.
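The attack Itamar describes, crafting an input that flips a model's decision, can be illustrated with a deliberately tiny sketch. The snippet below is not from the episode; it uses a hypothetical toy linear classifier and an FGSM-style perturbation (stepping the input against the gradient of the score) purely to show the mechanics. All names and numbers are illustrative.

```python
import numpy as np

# Toy linear "classifier": score = w @ x + b; predict class 1 if score > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# An input the model classifies correctly as class 1 (say, "stop sign").
x = np.array([2.0, 0.3, 0.4])
print(predict(x))        # 1

# FGSM-style perturbation: for a linear model, the gradient of the score
# with respect to the input is just w, so step against sign(w) to push
# the score down while changing each feature by at most eps.
eps = 1.0
x_adv = x - eps * np.sign(w)

# A small, bounded change to the input flips the prediction.
print(predict(x_adv))    # 0
```

In a real vision model the same idea applies to pixels: a perturbation imperceptible (or, as in the screen example, plausible-looking) to humans can move the input across the model's decision boundary.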
Published at DZone with permission of Yishai Beeri. See the original article here.
Opinions expressed by DZone contributors are their own.