How to Improve the Developer vs. AI Relationship

Less than half of software developers trust the accuracy of AI tools. Here are five ways that we can build a healthier relationship with AI.

By Knut Sveidqvist · Oct. 01, 24 · Opinion

Developers are often skeptical of artificial intelligence. Count me as one of them.

But the potential of AI tools — in the right context, of course — should outweigh our skepticism, regardless of how justified it may be. Like it or not, the path to faster releases and better products runs through AI. It’s on the development community to adapt.

There’s no denying the gap between coders and the wealth of AI tools on the market. According to data from Stack Overflow’s 2024 Developer Survey, only 43% of devs trust the accuracy of AI tools. Nearly half (45%) say that AI tools struggle to handle complex tasks. 

Some AI tools simply aren’t effective. Other platforms are injecting AI into areas where it’s not particularly helpful. There's a difference between doing a circus act with AI and actually using it to improve your workflow.

As a founder and CTO, I’ve had experience with both sides of the coin. I use AI tools such as ChatGPT to help me code, and my team has also worked hard to implement an AI assistant into our own product. Through these experiences, I’ve made a few observations about optimizing the impact of AI in a way that developers can still trust:

1. Find the Right Context for Using AI

At this point, we can’t trust AI on its own to write code from end to end. But that doesn’t mean we can’t use it to boost our efficiency. It’s all about establishing expectations.

I’ve written a lot about AI’s potential to remove some of the manual burden from developers’ workloads. Traditionally cumbersome tasks such as generating diagrams can easily be accelerated with the help of AI: the AI helps create a starting point for the chart, and the dev comes over the top to add their expertise and customization.
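
As a rough illustration of that workflow, here is a minimal Python sketch, assuming the openai package and an API key in the environment; the model name and prompt are placeholders, and the returned Mermaid source is only a draft for the developer to refine.

```python
# Minimal sketch: ask an LLM for a first-draft Mermaid diagram, then hand
# the source to a developer for manual refinement.
# Assumes the `openai` Python package and OPENAI_API_KEY in the environment;
# the model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()

def draft_mermaid_diagram(description: str) -> str:
    """Return Mermaid source for a first-draft flowchart of `description`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Reply with Mermaid flowchart source only, no prose."},
            {"role": "user", "content": f"Draw a flowchart of: {description}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = draft_mermaid_diagram("user signup with email verification")
    print(draft)  # a starting point; the developer edits and extends it by hand
```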

I personally use ChatGPT to write code. Sometimes I’ll ask the AI to help me generate an API and create a test suite for it. It greatly reduces my usual timeline for testing code and identifying errors.

AI can help generate automated tests based on a template. Afterward, the developer should carefully review the test scripts. Only then should you generate the actual code and run the tests against it until they pass.
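
To make that concrete, here is a hedged sketch of what an AI-drafted, human-reviewed pytest suite might look like for a hypothetical /users endpoint. The create_app factory and the endpoint itself are illustrative assumptions, not a real project's API; the point is that a developer reads and corrects these tests before any implementation code is generated.

```python
# Illustrative only: an AI-drafted pytest suite for a hypothetical
# Flask-style /users endpoint, after a developer's review.
# `create_app` is a hypothetical application factory, not a real library API.
import pytest
from myapp import create_app  # hypothetical module under test


@pytest.fixture
def client():
    app = create_app(testing=True)
    return app.test_client()


def test_create_user_returns_201(client):
    response = client.post("/users", json={"name": "Ada", "email": "ada@example.com"})
    assert response.status_code == 201
    assert response.get_json()["name"] == "Ada"


def test_create_user_rejects_missing_email(client):
    response = client.post("/users", json={"name": "Ada"})
    # Reviewed by a human: is 400 really the contract we want here?
    assert response.status_code == 400
```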

This approach is typically most effective for writing standard, almost boilerplate code. If you give AI clear boundaries, it works well and saves plenty of time.

2. Crowdsource Creative AI Uses

The dev world played a huge role in inspiring our team to integrate AI into Mermaid Chart. I remember coming across a video of a developer using ChatGPT to build his own Mermaid diagrams. The thought hit me: What if we integrated this into our product? Another win for the open-source community!

Long story short, developers should lean on each other for tool recommendations and for ways to use AI effectively beyond the hype.

3. Reduce Hallucinations From AI Systems

Hallucinations are likely a major source of distrust in AI. Some generative AI models produce hallucinations — aka misleading or incorrect responses — up to 27% of the time. This might not seem like much, but getting burned even once by a hallucination can put your guard up.

This makes it extremely important to vet any responses from AI systems. You can generate code that looks perfect, but isn’t. I’ve caught myself saying, "That API is perfect — how could I have missed it?", only to realize that API, in fact, does not exist. 
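
One cheap safeguard is to confirm that a suggested module and function actually exist in your environment before building on them. The sketch below uses only the Python standard library; the "parse" attribute it checks for is deliberately one that does not exist, which is exactly the kind of plausible-sounding name a model can invent.

```python
# Quick sanity check: does an AI-suggested function actually exist?
import importlib


def suggestion_exists(module_name: str, attribute: str) -> bool:
    """Return True if `module_name` imports and exposes `attribute`."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attribute)


print(suggestion_exists("json", "loads"))  # True: real API
print(suggestion_exists("json", "parse"))  # False: plausible but nonexistent
```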

Chatbot hallucinations are probably here to stay for a while. But the more that AI vendors can limit these misleading responses, the more developers can trust the outputs. And the more safeguards we put on how we use AI, such as checking generated code against a proper test suite, the more we can limit the problem ourselves.

4. Maintain a Grip on the Rudder

Building trust is a two-way street. Humans also need to take accountability for using AI responses in their work.

Even if quality doubles and hallucinations are cut in half, humans should always be captaining the ship. Cutting corners and taking AI responses as gospel will tarnish the actually useful and effective applications of AI in software development. We need to pay attention to how we’re using AI and make sure that the creative, strategic elements of our human thinking are in control.

In many ways, the conversation around AI trust should be more about inward reflection than an outward examination of the quality of AI tools.

5. Establish a Common Understanding Around AI Usage

We’ve really only seen the tip of the iceberg when it comes to AI’s evolution. The trust conundrum isn’t going away, and AI models are only going to increase in quality.

Will we get to a point where AI can replace every knowledge worker? Or have we reached a plateau? Will we be able to launch autonomous AI agents to do work for us? At what point does this all become harmful?

That’s why it’ll be important for governing bodies to establish some common standards around AI usage. It would be nice to see nations align, with the EU and the US perhaps working together to establish shared frameworks.

There’s a lot of sizzle around AI. But there’s also a lot of substance. It’s important to understand the difference between the two. And building comfort and trust with these systems will require some inward reflection, creativity, and initiative. 

The potential results that await developers are exciting, to say the least.

Tags: AI, Testing, Generative AI, Software Developer

Opinions expressed by DZone contributors are their own.
