Hi AYSE_KILIC!
Implementing this may involve some effort, but in short, I believe it's possible. For instance, imagine creating a function A, and then instructing GPT to "execute function A." GPT could trigger the execution of function A based on that command. If function A requires certain conditions to be met, you could create a function B that checks those conditions. GPT could then automatically call function B, determine whether the conditions are satisfied, and execute function A only if they are. This kind of logic is certainly achievable.
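To make the idea concrete, here is a minimal sketch of that gating logic, independent of any particular API. The names `function_a`, `function_b`, and `dispatch` are hypothetical; in a real setup you would register A and B as tools with the model and execute whichever tool call it returns, but the conditional structure is the same.

```python
# Hypothetical sketch: execute function A only if function B's check passes.

def function_b() -> bool:
    """Check whether the preconditions for function_a are met."""
    return True  # placeholder condition; replace with a real check


def function_a() -> str:
    """The action the model is asked to trigger."""
    return "function A executed"


def dispatch(command: str) -> str:
    """Route a natural-language-style command, gating A behind B."""
    if command == "execute function A":
        if function_b():
            return function_a()
        return "preconditions not met; function A was skipped"
    return "unknown command"


print(dispatch("execute function A"))
```

The point of the `dispatch` layer is that the model never runs anything directly: your own code decides what actually executes, so the precondition check in B cannot be skipped by the model.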
By building such a system, you could create an interface where users can give commands in natural language, receive responses in natural language, and see the corresponding processes executed. However, keep in mind that every instruction incurs a cost due to token usage in API calls.
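Since each call is billed by tokens, it helps to estimate costs up front. Here is a rough sketch; the per-token prices are hypothetical placeholders, so check your provider's current pricing before relying on the numbers.

```python
# Assumed, illustrative prices -- NOT current OpenAI pricing.
PROMPT_PRICE_PER_1K = 0.01      # assumed USD per 1,000 prompt tokens
COMPLETION_PRICE_PER_1K = 0.03  # assumed USD per 1,000 completion tokens


def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of a single API call."""
    return (prompt_tokens / 1000) * PROMPT_PRICE_PER_1K \
        + (completion_tokens / 1000) * COMPLETION_PRICE_PER_1K


# e.g. a call with 1,200 prompt tokens and 400 completion tokens
print(round(estimate_cost(1200, 400), 4))
```

Multiplying a per-call estimate like this by your expected request volume gives a quick sanity check before you expose the interface to users.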
Regarding integrating the OpenAI API into your backend, there are likely many examples available online. When I worked on a similar project a few years ago, I found the documentation to be well-structured, and I didn't find the API particularly difficult to work with. That said, I'm unsure about the current state of the documentation and API. At the time, I did feel that optimizing for cost (in terms of token usage) was a bit challenging.
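One simple cost-control tactic is to cap how much conversation history you resend with each request. The sketch below approximates token counts with a naive word count purely for illustration; a real implementation would use a proper tokenizer.

```python
# Keep only the most recent messages that fit under a token budget.
# Token counts are approximated by word count (illustration only).

def trim_history(messages: list[dict], max_tokens: int) -> list[dict]:
    """Return the newest messages whose combined approximate token
    count stays within max_tokens, preserving their original order."""
    kept, total = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        tokens = len(msg["content"].split())  # crude approximation
        if total + tokens > max_tokens:
            break
        kept.append(msg)
        total += tokens
    return list(reversed(kept))


history = [
    {"role": "user", "content": "first question about the project"},
    {"role": "assistant", "content": "a long detailed answer " * 50},
    {"role": "user", "content": "follow-up question"},
]
trimmed = trim_history(history, max_tokens=20)
print(len(trimmed))  # only the most recent short message fits the budget
```

Dropping older turns loses context, of course, so in practice people often summarize the discarded history into one short message rather than deleting it outright.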
Also, depending on how your current implementation is set up, note that if your code isn't working as expected, repeatedly pasting the entire codebase into GPT and asking for corrections might not lead to the desired results. Instead, I recommend breaking the code into smaller sections and asking specific, localized debugging questions, such as, "If this part is incorrect, how might I fix it?" This approach tends to be more effective for debugging.
Integrating external APIs can be an excellent learning experience, so I encourage you to take on the challenge—it can be quite rewarding! Lastly, don’t forget to store your API key securely, such as in a secrets manager, to ensure it’s used safely.
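On the last point, a minimal pattern for keeping the key out of your source code is to read it from the environment, which a secrets manager or a local `.env` file can populate. `OPENAI_API_KEY` is the conventional variable name, but the helper below is just an illustrative sketch.

```python
import os


def load_api_key() -> str:
    """Read the API key from the environment; fail fast if missing."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return key


# Demo only: in real use the variable is set outside the program,
# never hardcoded like this.
os.environ["OPENAI_API_KEY"] = "sk-demo-not-a-real-key"
print(load_api_key())
```

Failing fast with a clear error when the key is missing is kinder than letting the first API call die with an opaque authentication failure.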