from thingsiplay@lemmy.ml to programming@programming.dev on 26 Feb 15:19
https://lemmy.ml/post/43736656
I just read how someone on RetroArch tries to improve documentation by using Copilot. But not in the sense we might think. His approach is to let Copilot read the documentation and give him follow-up questions a hypothetical developer might have. This could also be extended to normal code I guess: have it pretend to be a student, maybe, and ask questions instead of generating content or making changes? I really like this approach.
For context, I myself don’t use online Ai tools, only offline “weak” Ai run on my own hardware. And I mostly don’t use it to generate code; it’s more like asking questions in the chatbox, or revising code parts and then analyzing and testing the “improved” version. Otherwise I do not use it much in any other form. It’s mainly to experiment.
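The docs-questioning idea could be sketched roughly like this. This is just a minimal sketch: the prompt-building part is the actual technique, while `ask_model` is a hypothetical placeholder you would wire up to whatever local or hosted LLM you actually run (it is not any real API).

```python
# Sketch: have an LLM play a confused new reader of your docs,
# instead of a code generator. Only the prompt construction is
# concrete here; `ask_model` is a placeholder for your own setup.

def build_student_prompt(documentation: str, num_questions: int = 5) -> str:
    """Build a prompt that asks the model to act as a newcomer and
    return open questions, rather than rewriting anything."""
    return (
        "You are a new developer reading the project documentation below.\n"
        "Do not rewrite or summarize it. Instead, list the "
        f"{num_questions} follow-up questions you would still need "
        "answered before you could start contributing.\n\n"
        "--- DOCUMENTATION ---\n"
        f"{documentation}\n"
        "--- END ---"
    )

def ask_model(prompt: str) -> str:
    # Placeholder: connect this to your local model (e.g. a
    # llama.cpp or similar HTTP endpoint). Intentionally unimplemented.
    raise NotImplementedError
```

Usage would be something like `ask_model(build_student_prompt(open("README.md").read()))`; the unanswered questions the model produces point at the gaps in the documentation.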
#programming
threaded - newest
This is some kind of P2P code review?
Rubber ducky with more steps, cost and impact to society.
I’m not really familiar with Rubber ducky and just quickly searched the web. So it is a tool to create tests? Or what is it exactly? Is it an Ai tool? Can it read the entire code or documentation base and then pretend to be a student or developer that asks you questions about it?
I am not downplaying the other issues it has, like licensing, cost, environmental impact, dependency and privacy issues. These are still a problem with such an online LLM tool. But that is not the point of my post and does not take away from a “good” use case. In my opinion.
en.wikipedia.org/wiki/Rubber_duck_debugging
Ah I see, thanks for the link. I don’t think the case I talked about is the same as Rubber duck debugging. It’s not about reading your problem aloud (maybe in front of another programmer or an audience). It’s more like your students or end users reading the documentation, still having some questions left at the end, and then asking you about the stuff they did not understand.
Rubber ducky can be anything you want it to be and has solved more bugs to date than all the LLMs combined. en.wikipedia.org/wiki/Rubber_duck_debugging
I don’t think the case I talked about in my post is comparable to Rubber duck debugging.
Yes, it is. Maybe you need to experience it firsthand first.
I read about what Rubber duck debugging is in the linked article. It’s a totally different thing than what I’m talking about.
In writing, it’s simply the most magnificent brainstorming tool ever created.
Idiots using it as a substitute for writing, and just signing off on whatever is produced, ought never be trusted with writing responsibility again.
( IF you’re signing off on something, THEN you’re responsible for its quality — that’s the principle )
I think you’re onto something…
using it to corner yourself into a better-quality understanding is a good use of it.
_ /\ _
I think AI is great for code review. It’s a best-effort process anyway, so letting an AI loose in addition to a coworker doesn’t hurt. So far it beats any human review by far, because it can detect even the most obscure (potential) flaws.
This is the disconnect we are seeing. It is a useful tool for improving the QUALITY of our output, but it’s not labor-saving. The problem is that American industry doesn’t care about quality and only wants to use this if it saves on labor costs.
A tool can be used in many ways. Sometimes even the right way.
it’s fucking sad that this is a revelation.
Never said it’s a revelation. I just pointed out an interesting use case, one which does not involve content generation like code or art.
I’m not saying that’s what you’re saying. It’s just sad that every fucking conversation is about how trash AI is or how it’s going to save the world. It has interesting uses to augment human capabilities.