ChatGPT has been in the news a lot recently. The hype around it continues, and many are concerned that this AI service might put many out of work.
Developers in particular feel threatened by this AI bot’s ability to write code on the fly. The general consensus is, however, that code-writing jobs for humans are safe for now.
Tech advisor Bernard Marr says that ChatGPT and natural language processing technology are unlikely to render developers, programmers, and software engineers unnecessary now and in the near future.
For one, ChatGPT can only write relatively simple applications. Even if it can produce more advanced code with suitable instructions, it does not instantly give non-developers a competitive edge over developers who understand code and have real experience writing it.
Another crucial reason many developer jobs are safe is the need for secure coding. ChatGPT itself concedes that it cannot guarantee that the code it churns out is secure.
Asked if ChatGPT can ensure code security, here’s the bot’s brief response:
“No, ChatGPT does not ensure secure coding. ChatGPT is an AI language model that can assist in answering questions and generating text based on the input it receives. However, it does not have the capability to guarantee secure coding practices or conduct security assessments on code. It's important to follow established security guidelines and best practices when developing and deploying code.”
ChatGPT learns further as it is continuously updated. But its ability to incorporate secure coding practices may take some time to reach an acceptable level of maturity. Or it may never be able to perfect secure coding, given the evolving nature of the threat landscape.
What is Secure Coding?
Secure coding is a newer paradigm in software development in which responsibility for code security shifts left, landing with the developer. Security is no longer a separate process but a part of the software development life cycle (SDLC). It may not be compulsory, but it is encouraged and preferred.
Organizations that embrace secure coding gain the advantage of being able to easily comply with industry standards.
The software production process is also significantly shortened: instead of going through a separate stage of code scanning and testing to ensure security, bugs and other flaws are addressed before code is deployed.
It is easier to fix these problems if you can spot and resolve them during the code-writing process instead of dealing with them in a separate stage.
Secure coding may seem like an added burden for developers, but it is a change worth adopting given its significant benefits. It enmeshes security with the SDLC to reduce the need for major security revisions, and it results in significantly better app security upon release.
Why ChatGPT Isn't Yet Capable of Secure Coding
Secure coding entails evolving best practices that ChatGPT has yet to learn. As its FAQ reveals, its knowledge is limited to content published online up to 2021.
ChatGPT is not updated with the latest intelligence about new threats, vulnerabilities, and attacks. It is not specifically linked to any cybersecurity framework. It is also lacking in the following areas:
Security visibility and monitoring
ChatGPT is not designed to account for the data that gets saved in the code repository it generates. It also does not perform automated vulnerability scans, triage findings, or map the IT assets an application will impact.
Because of this, there is no assurance that the code it produces is safe for the specific IT ecosystem where it will be deployed.
No secrets management mindfulness
There are times when developers unwittingly include secrets such as username-password pairs, API keys, and tokens in log entries. This is a no-no in secure coding, and ChatGPT does not have the mindfulness to take this into account.
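To illustrate the kind of safeguard a mindful developer adds on their own, here is a minimal Python sketch of a logging filter that masks secret-looking values before they reach log output. The patterns and names are illustrative assumptions, not an exhaustive or production-ready rule set:

```python
import logging
import re

# Patterns that suggest a secret is being logged -- a simplified,
# illustrative list, not an exhaustive one.
SECRET_PATTERNS = [
    re.compile(r"(password\s*[=:]\s*)\S+", re.IGNORECASE),
    re.compile(r"(api[_-]?key\s*[=:]\s*)\S+", re.IGNORECASE),
    re.compile(r"(token\s*[=:]\s*)\S+", re.IGNORECASE),
]


class RedactSecretsFilter(logging.Filter):
    """Logging filter that masks secret-looking values in log messages."""

    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        for pattern in SECRET_PATTERNS:
            message = pattern.sub(r"\1[REDACTED]", message)
        record.msg = message
        record.args = None  # message is already fully formatted
        return True  # keep the (sanitized) record


logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.addFilter(RedactSecretsFilter())
logger.addHandler(handler)

logger.warning("login failed for user=alice password=hunter2")
# The handler emits the message with the password shown as [REDACTED].
```

A filter like this is a backstop, not a substitute for never passing secrets to the logger in the first place.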
No guarantee against misconfiguration
Security misconfiguration is one of the most common flaws in human-written software. AI is supposed to help avoid it, but ChatGPT offers no guarantee that the applications it helps build will be free of configuration issues.
Inability to enforce code obfuscation
Code obfuscation refers to the modification of source code or machine code to make it difficult for hackers to understand and reverse engineer it. This is one of the techniques used in secure coding, and ChatGPT says it is incapable of doing it.
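As a toy illustration of what obfuscation does, here is a hypothetical Python sketch that renames every variable and parameter in a function to an opaque token using the standard ast module. All names here are invented for the example, and real obfuscators go much further (string encoding, control-flow flattening, metadata stripping):

```python
import ast


class RenameIdentifiers(ast.NodeTransformer):
    """Crudely rename variables and parameters to opaque tokens.

    This only scrambles names; real obfuscators also encode strings
    and flatten control flow to resist reverse engineering.
    """

    def __init__(self):
        self.mapping = {}

    def _opaque(self, name):
        if name not in self.mapping:
            self.mapping[name] = f"_x{len(self.mapping)}"
        return self.mapping[name]

    def visit_Name(self, node):
        node.id = self._opaque(node.id)
        return node

    def visit_arg(self, node):
        node.arg = self._opaque(node.arg)
        return node


source = """
def total(prices):
    result = 0
    for price in prices:
        result = result + price
    return result
"""

obfuscated = ast.unparse(RenameIdentifiers().visit(ast.parse(source)))
print(obfuscated)  # same behavior, but 'prices', 'result', 'price' are gone
```

The transformed function behaves identically but its intent is harder to read at a glance; ast.unparse requires Python 3.9 or newer.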
Lack of ability to conduct code security reviews
ChatGPT also concedes that code review is not within its range of impressive skills. This is what the AI chatbot has to say about it:
“As a language model, I can provide information on various software security practices and suggest best practices, but I cannot perform an effective code security review on my own.”
No external data source validation
There are development projects that involve the use of pre-written code and modules from open-source or third-party sources.
Integrating these components with code written by ChatGPT may not be a good idea. ChatGPT does not have the ability to ensure the legitimacy, security, and authenticity of external data sources.
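Developers therefore have to perform that validation themselves. A minimal sketch of one common check, assuming a trusted SHA-256 checksum has been obtained out of band from the component's maintainers (the file path and constant below are placeholders):

```python
import hashlib


def verify_sha256(path: str, expected_hex: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the trusted value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large downloads don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex


# Hypothetical usage: refuse to integrate a third-party module whose
# checksum does not match the value published by its maintainers.
# if not verify_sha256("vendor/module.tar.gz", PUBLISHED_SHA256):
#     raise RuntimeError("checksum mismatch: refusing to use this download")
```

A checksum only proves integrity, not trustworthiness; signature verification and dependency auditing are still needed on top of it.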
No threat modeling
ChatGPT is a general-purpose chatbot that happens to be capable of writing passable code. It is unsurprising that it does not have advanced secure coding capabilities like a multistage process for code weakness and vulnerability assessment throughout the SDLC.
The Irony of AI in Cybersecurity
Despite being touted as one of the most important technologies in cybersecurity, AI chatbots like ChatGPT do not actually excel at cybersecurity.
They are effective tools in simplifying tasks in various cybersecurity processes such as the detection of threats, attacks, and anomalous behavior. But they cannot be left to their own devices to enforce effective cybersecurity.
Coding software is not as simple and straightforward as many tend to think it is in the context of the ChatGPT hype. This is not to say that AI tools like ChatGPT are not remarkable. But ChatGPT does not have the specialized knowledge and expertise to reliably address modern threats.
AI-powered secure coding tools exist to help those who want to address potential security flaws in the code they build. But code is only as good as the developer’s intentions and depth of understanding.
Coding newbies who know little about how code works, let alone how to secure it, will have to improve their understanding of coding and cybersecurity to take full advantage of AI coding and AI secure coding solutions.
Image via Unsplash (Jonathan Kemper)