Have you found an actually practical use for an LLM, or a place for one in your workflow?
from cheese_greater@lemmy.world to nostupidquestions@lemmy.ca on 18 Feb 02:42
https://lemmy.world/post/43269773

#nostupidquestions


lowspeedchase@lemmy.dbzer0.com on 18 Feb 02:45

Automated QA

wesker@lemmy.sdf.org on 18 Feb 02:57

Completely automated? No. But I do have workspace/codebase instruction files I’ve been iterating on for months that can produce pretty damn reliable results for prompted feature requests. And that’s saying a lot, because I have very specific and intentional design preferences.

Speculater@lemmy.world on 18 Feb 02:58

I have a calorie counting app that I usually scan product barcodes with, but if it’s a whole meal at a restaurant I’ll use AI to give a decent estimate of macros and calories. Worst case is I under/over count occasionally, but I use it so rarely it doesn’t impact months-long trends.

DrBob@lemmy.ca on 18 Feb 02:58

My workplace is currently rooting out every AI feature they can find and removing it. I work in a critical infrastructure industry and they can’t risk confidential information winding up in an LLM training environment.

fossilesque@mander.xyz on 18 Feb 03:19

I make it do my YAML and sidecar files for Obsidian markdown documents and other files.
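If that means frontmatter/properties, the kind of helper an LLM spits out for it looks roughly like this (a sketch only; the vault path and frontmatter fields are made up):

```python
# Sketch: add minimal YAML frontmatter to Obsidian notes that lack it.
# Vault path and frontmatter fields are examples, not a real config.
from datetime import date
from pathlib import Path

VAULT = Path.home() / "Obsidian" / "vault"   # hypothetical vault location

for note in VAULT.rglob("*.md"):
    text = note.read_text(encoding="utf-8")
    if text.startswith("---"):
        continue  # frontmatter already present
    frontmatter = (
        "---\n"
        f"title: {note.stem}\n"
        f"created: {date.today().isoformat()}\n"
        "tags: []\n"
        "---\n\n"
    )
    note.write_text(frontmatter + text, encoding="utf-8")
```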

Shadow@lemmy.ca on 18 Feb 03:38

Yes, I use it all day at work for troubleshooting unfamiliar code in a giant monorepo, and for writing Terraform / k8s files. Claude Code is easily a 20x time multiplier for investigating things I’m not fluent in, and probably 3x for writing new stuff.

Note that I’m not vibe coding. Everything is peer reviewed and I understand what it’s all doing. Many people seem to assume that if you use an AI then you’re just vibe coding, but that isn’t the case.

wesker@lemmy.sdf.org on 18 Feb 03:56

I don’t understand the bad rap “vibe coding” gets. People seem to assume that everyone is incapable of reviewing the code that has been generated at the end of an exploratory “vibe code” session, and that it doesn’t go through legitimate PR processes if it’s deemed useful.

Shadow@lemmy.ca on 18 Feb 04:48

There’s no room for nuance on the internet anymore. AI == Bad

j4k3@piefed.world on 18 Feb 03:43

Reading with text to speech. I have a script I wrote a couple of years ago using the open-weights Silero model. There are better options now, but it helps with big blocks of text.
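The script is basically a thin wrapper around Silero’s torch.hub entry point; model and speaker names shift between releases, so treat this as a sketch:

```python
# Sketch: read a block of text aloud with Silero TTS loaded via torch.hub.
# Model/speaker identifiers vary by Silero release; check their repo.
import torch
import soundfile as sf

model, _ = torch.hub.load(
    repo_or_dir="snakers4/silero-models",
    model="silero_tts",
    language="en",
    speaker="v3_en",
)

text = "Paste the long block of text you want read aloud here."
audio = model.apply_tts(text=text, speaker="en_0", sample_rate=48000)

sf.write("speech.wav", audio.numpy(), 48000)
```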

Qwen Coder models are better than Stack Exchange for code snippets. They are also good at OCR, formatting structured data, and the like.
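Querying a locally hosted Qwen coder model is just a chat-completion call; assuming an OpenAI-compatible endpoint such as Ollama’s (the URL and model tag here are examples), it looks roughly like:

```python
# Sketch: ask a local Qwen coder model for a snippet through an
# OpenAI-compatible chat endpoint. URL and model tag are assumptions.
import requests

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",   # e.g. Ollama's endpoint
    json={
        "model": "qwen2.5-coder:7b",                # whichever tag you pulled
        "messages": [
            {"role": "user",
             "content": "Write a bash one-liner that lists files larger than 1 GB."}
        ],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```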

I like messing around with alignment as a puzzle game using the vocabulary and a bunch of scripts.

Qwen can do basic FreeCAD macros. It is also good at Lisp with Emacs, as one would expect given the history of Lisp.
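The macros it produces are usually at about this level (a trivial, made-up example), which is still a time saver:

```python
# Sketch of a trivial FreeCAD macro: a box with a cylinder cut through it.
# Run from FreeCAD's macro editor; dimensions are arbitrary.
import FreeCAD as App
import Part

doc = App.newDocument("LLMDemo")
box = Part.makeBox(40, 20, 10)                           # length, width, height in mm
hole = Part.makeCylinder(4, 10, App.Vector(20, 10, 0))   # radius, height, position
Part.show(box.cut(hole))
doc.recompute()
```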

I have a setup for chatting in various modes like friend, mentor, philosophy, and lewd. I have a conversational diffusion setup where the dialog is in the images. I also have a setup that started as a writing partner but ended up as my science fiction universe; it does not write much ahead of me, it is like sentence completion and writes in my voice.

I use segmentation stuff with images. I can use it to break up objects into image layers far better than I can brute force in GIMP. I’ve only run open-weights models on my own hardware.

zigmus64@lemmy.world on 18 Feb 04:15

I’m not a coder… but I’m somewhat code literate. I can follow logical structures and understand flow control and conditional loops… I am, however, not proficient in coding. Any skills I may develop over a period of intense work are lost as soon as I move on from the project I learned them for. It was time-consuming, laborious, and left me completely spent at the end of my work day.

LLMs provide me with a way to generate code to automate small analysis projects. Hand-jamming the data would take (almost) literally forever, and automating it always seems like more work than it’s worth. LLMs give me a way to make functional code quickly and efficiently… my work doesn’t require enough coding to develop proficiency, but coding has always been a useful tool.

In my personal life, it’s enabled me to tinker with my home computing setup in ways I never would have been able to. I’ve fucked with AI image generation in ComfyUI, and have dicked around with hosting LLMs locally… I even set up a way for me to interface with a locally hosted LLM remotely from my cell phone… I did all of this using Claude AI walking me through setup and everything.

My next project is to set up a local LLM for personal financial management. The LLM will automate categorization of transactions and feed them into a Python Panel setup to track spending and aid long-term planning (a rough sketch below).

…medium.com/analyzing-personal-finances-locally-w…
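The rough shape I’m aiming for is something like this (the endpoint, model name, categories, and CSV columns are all placeholders, not a finished tool):

```python
# Sketch: categorize transactions with a local LLM, then summarize in Panel.
# Endpoint URL, model name, category list, and CSV layout are assumptions.
import pandas as pd
import panel as pn
import requests

CATEGORIES = ["groceries", "rent", "transport", "dining", "other"]

def categorize(description: str) -> str:
    resp = requests.post(
        "http://localhost:11434/v1/chat/completions",    # local OpenAI-style endpoint
        json={
            "model": "llama3.1:8b",                       # whatever local model is loaded
            "messages": [{
                "role": "user",
                "content": f"Pick one category from {CATEGORIES} for this transaction "
                           f"and reply with the category only: {description}",
            }],
        },
        timeout=60,
    )
    answer = resp.json()["choices"][0]["message"]["content"].strip().lower()
    return answer if answer in CATEGORIES else "other"

df = pd.read_csv("transactions.csv")          # expects 'description' and 'amount' columns
df["category"] = df["description"].apply(categorize)
summary = df.groupby("category")["amount"].sum().reset_index()

pn.extension()
pn.Column("# Spending by category", pn.pane.DataFrame(summary)).servable()
# Serve with: panel serve this_script.py
```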

axx@slrpnk.net on 18 Feb 07:43

You want something like Actual or Maybe for this last one. (They are both open source.)

zigmus64@lemmy.world on 18 Feb 12:41

Thanks, that looks awesome.

lvxferre@mander.xyz on 18 Feb 04:48

I’ve barely been doing it any more due to environmental concerns, but I’ve used LLMs for translations in two ways:

  1. If I understand a sentence in the source language well, but I’m having a really hard time phrasing it naturally in the target language, sometimes I ask a gen model to translate it. Just for ideas; I never copy the output, and often I simply use one or two words from it, so incorrect output is not an issue.
  2. Glorified conjugation/declension dictionary. Especially useful if you input non-lemma forms. I only use it in situations where I don’t quite remember the conjugation/declension of the word in question, never if I don’t know it, so incorrect output sticks out like a sore thumb.

vaccinationviablowdart@lemmy.ca on 18 Feb 05:26

In health care workplaces, a lot of people are using AI scribe tools to assist with documentation.

There are a lot of bad/fake attempts to apply AI to health, but this one is already almost universal. It is fully embraced by the workforce because paperwork has reached crisis level over the past decade and this is the first potential solution. Everybody is using it whether you think you’ve given consent or not, most especially in office-type visits where you sit in a room and talk to someone.

God knows what the data is being used to train for.

JoeBigelow@lemmy.ca on 18 Feb 13:34

I felt a little uneasy the first time my provider asked for consent to use the AI scribe. I asked for a transcript and it was dead accurate to the conversation. I hadn’t considered its value as training data, but I was glad that my overworked Drs had a tool to alleviate some of the workload. The future is going to be so much fun.

Duke_Nukem_1990@feddit.org on 18 Feb 07:15

No

neutronbumblebee@mander.xyz on 18 Feb 07:50

It’s great for yearly performance review questions. I just feed in keywords and it generates a relentless tide of goals, metrics, and progress reports which no one actually reads. Lovely. For anything technical, nope. I save much more time just working problems myself. At least I learn from my mistakes.

CanadianCorhen@lemmy.ca on 18 Feb 16:17

Couple areas:

  1. I’m doing a bathroom reno; I bounce ideas off it and ask about the particularities of the local building code.
  2. I use it as an AA for my work, reviewing my emails
  3. I use it for coding my Home Assistant automations (see the sketch below).
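YAML is the usual route for Home Assistant automations, but just to illustrate the shape, a Python version written as an AppDaemon app (entity IDs and timings are made up) is roughly:

```python
# Sketch of a motion-light automation as an AppDaemon app.
# Entity IDs and the 5-minute timeout are made-up examples.
import appdaemon.plugins.hass.hassapi as hass

class MotionLight(hass.Hass):
    def initialize(self):
        # Fire when the hallway motion sensor flips to "on".
        self.listen_state(self.on_motion, "binary_sensor.hallway_motion", new="on")

    def on_motion(self, entity, attribute, old, new, kwargs):
        self.turn_on("light.hallway")
        self.run_in(self.lights_off, 300)   # turn back off after 5 minutes

    def lights_off(self, kwargs):
        self.turn_off("light.hallway")
```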