
Intelligent Use of LLMs

anon_51e said in #2971 4w ago:

I would like to start a thread to share the methods we employ to use LLMs in a way that enhances our abilities, rather than just lazily outsourcing tasks to them. The heuristic for the techniques I am looking for: after employing the technique, are you more capable or knowledgeable than you were before? In other words, even if you never had access to an LLM again, would you still be better off?

To start off here are some strategies and the contexts in which I have employed them:

In studying Ancient Greek, I take a given sentence and ask the LLM *not* to translate it; instead, I give my best shot at a translation, and it uses the Socratic method to guide me to a better understanding of the text through questions. This typically reminds me of grammar I have forgotten, gives me a more nuanced understanding of vocabulary, or just helps me identify parsing mistakes.
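If it helps anyone replicate this, here is a minimal sketch of how the setup could look via the OpenAI Python client. The system prompt wording and model name are my own guesses, not the exact prompt above:

```python
# Sketch of a Socratic tutoring setup for a chat-style LLM API.
# The prompt wording below is illustrative, not the poster's exact prompt.

SOCRATIC_SYSTEM_PROMPT = (
    "I am studying Ancient Greek. Do NOT translate the sentence for me. "
    "I will offer my own translation attempt; respond only with Socratic "
    "questions that guide me toward grammar, vocabulary, or parsing "
    "mistakes in my attempt."
)

def socratic_messages(sentence: str, attempt: str) -> list[dict]:
    """Build the chat payload: the model sees the rule, the sentence,
    and the student's attempt, but is never asked to translate."""
    return [
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": f"Sentence: {sentence}\nMy attempt: {attempt}"},
    ]

# Usage (requires an API key; shown for shape only, assuming the
# standard OpenAI client):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",
#     messages=socratic_messages(sentence, attempt),
# )
```

The key design point is that the "don't translate" rule lives in the system message, so it survives across turns as you revise your attempt.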

Learning how to use specific publicly available libraries through more refined examples and targeted questions. When trying to use a library or API, you typically just want answers to some common questions so you'll know how to make design decisions. LLMs are superior to reading docs cold, as you can ask more targeted questions to build your initial understanding and learn how to navigate the docs.

Getting feedback on written works with nuanced context. I was writing a poem for someone, and I was able to do the work myself on the first few passes, then give it to an LLM and ask for critiques. It was able to point out the weakest aspects of the poem so I could go back and target those. Similarly, I was in negotiations, and talking the situation through helped me rephrase a proposal in a way that made it easier to fulfill. Previously, I wouldn't have known to phrase it this way, as the situation wasn't a common one for me.

Web searches in general to answer idle curiosities. From mathematical topics I vaguely understand, where I can now get a primer custom tailored to my knowledge, to complex theological dogma questions and exploring the different viewpoints. Previously, I would just not look for this information, because it would involve more reading and study than I cared to do. Now the bar is lower for learning small interesting tidbits. This may be the weakest of the techniques, as I am unsure whether it is beneficial long term, rather than just a step above not knowing things. I like to believe wide knowledge will be useful though.

When coding, LLMs are good for helping make a plan of attack and considering tradeoffs. Basically just a rubber duck that knows about all the standard libraries and system calls and faintly recalls every blog post ever written. Previously, I would maybe write my plan in a text file before I started executing, but typically I would just start, and thus my efforts would be less intentional. The LLMs just make this all a bit more intentional and help me realize problems before they happen.
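A reusable prompt template makes this planning step cheap enough to do every time. This is a sketch in the spirit of the paragraph above; the checklist items are my own guesses at useful questions, not the poster's:

```python
# Sketch of a reusable "rubber duck" planning prompt. The checklist
# items are illustrative assumptions, not a prescribed method.

PLANNING_TEMPLATE = """I'm about to implement: {goal}

Current context: {context}

Before I write any code, act as a rubber duck:
1. Restate the problem as you understand it.
2. List the standard-library or system facilities that already cover parts of it.
3. Name two alternative approaches and their tradeoffs.
4. Point out the failure modes I'm most likely to miss.
"""

def planning_prompt(goal: str, context: str) -> str:
    """Fill in the template so the same planning ritual applies to any task."""
    return PLANNING_TEMPLATE.format(goal=goal, context=context)

print(planning_prompt(
    goal="a retrying HTTP fetcher",
    context="Python service, stdlib only",
))
```

Keeping the template in a file means the "write down the plan first" habit survives even when you're in a hurry.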

As for techniques that don't work: I have yet to find their code good enough for any existing codebase, though they can spit out a proof of concept pretty easily with *heavy* guidance and guardrails about technical decisions and priorities. I don't count this use case for this thread, as it doesn't pass our heuristic of being useful long term even without access to an LLM.

Please share your experiments and techniques, both successful and failed.


anon_522 said in #2978 4w ago:

Mostly I use LLMs (claude) for fairly constrained code-drafting, in which it implicitly looks up all the specifics of how to call this or that and threads the pieces together. It's basically like stackoverflow for your particular problem. This only works for fairly simple straightforward code though. I also use it as a tutor to explain things to me. I like to think I actually know something more after doing this that I didn't know beforehand.


anon_52a said in #2999 3w ago:

Most of my cases boil down to it just being a search engine for common patterns (in code most commonly, e.g., architectural scenarios--"i have this problem. i was thinking this approach. how would it look in code if i tried this? is this common? what's the standard?").

Really just asking myself "what would I look at?" then giving it all of that.

The above covers basically everything but sometimes when I'm completely confused on what the problem even is (can't find a communicable framing), pointing its focus at different aspects of what we've been talking about has been a neat way of finding new directions to explore in the problem space.

But the above is all mostly just standard conversational debugging, not really specific to LLMs and no different than interacting with another person imo.

I've been toying with the idea of some personalized deep research workflow that generates reports on topics inferred from your online activity (supposing preferences in a text information environment would be more revealing through things like search history etc. than through what you ask LLMs directly), but I haven't gotten around to really fleshing it out.
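To make the idea above concrete, here is a rough sketch: infer report topics from a log of search queries and turn each into a deep-research prompt. The file format, tokenization, and stopword list are all placeholder assumptions, since the workflow hasn't been fleshed out:

```python
# Rough sketch of a personalized deep-research pipeline: crude topic
# inference over search queries, then a report prompt per topic.
# Everything here (stopwords, thresholds, prompt wording) is a
# placeholder assumption.
from collections import Counter

STOPWORDS = {"how", "to", "the", "a", "an", "what", "is", "of", "in", "for"}

def infer_topics(queries: list[str], top_n: int = 3) -> list[str]:
    """Most frequent non-stopword terms across the query log."""
    counts = Counter(
        word
        for q in queries
        for word in q.lower().split()
        if word not in STOPWORDS
    )
    return [word for word, _ in counts.most_common(top_n)]

def research_prompt(topic: str) -> str:
    return (
        f"Write a sourced report on '{topic}' pitched at someone who has "
        f"been searching about it recently; cite primary material."
    )

# Example query log (made up):
queries = [
    "ottoman tax farming system",
    "ottoman janissary recruitment",
    "byzantine theme system",
]
for topic in infer_topics(queries, top_n=2):
    print(research_prompt(topic))
```

A real version would presumably swap the word counter for something smarter (embeddings, session clustering), but even this shape shows why search history is a richer signal than direct questions: it's what you actually chase, not what you think to ask.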

I'm betting there's a lot of untapped power in LLMs organizing things for our consumption as I haven't seen a ton floating around on all of that.

referenced by: >>3033


anon_531 said in #3026 2w ago:

I've been an enthusiastic adopter of Deep Research for generating reports that provide a launching-off point while "doing the work." The fact that it cites its sources and provides links to the primary material makes it a $200/mo graduate student who works for me. This falls into "outsourcing tasks," however.

Other areas where it's been powerful: I needed a physical therapist for my neck, but my co-payment is $200 per appointment. I used GPT-4.5 Deep Research to write a progressive custom therapy modality using the tools and methodologies that I knew worked.

Similarly, I needed to cut weight but didn't want to use Ozempic. So I told GPT my macros and had it design me a custom meal plan. I didn't "stick to the meal plan," but the process of creating it gave me a lot of color on the kinds of foods and meals that could fit my macros.

referenced by: >>3033


anon_532 said in #3027 2w ago:

LLMs are good for particular sorts of searches. I wanted to find a particular LIFE photo spread that Google wouldn't turn up, but ChatGPT would.

It works well for translation from particular languages. This is a good use. It probably can't hallucinate much, since it has the text. And you can tune it. You can say: "Translate it in the style of Enid Blyton," "Make it contemporary, but not faux-naif blog voice," "Tone down the contemporary language and give me a tone something like Donald Keene would do it."

I think it works well for editing work in foreign languages. I need to write instant messenger messages in Korean and French relatively often. DeepSeek in particular is good at flagging poor word choice.

In other words, this is to say: give it the text and tell it to do something. It's bad at coming up with novel ideas and so on.


anon_522 said in #3033 2w ago:

>>2999
>just standard conversational debugging, not really specific to LLMs
I find at least 50% of the time when coding, it doesn't do what I need unless I write out a really detailed prompt that goes into autistic line-by-line descriptions of what needs to happen. And then, it only gets it right 50% of the time. And usually, by the time I've written all that out, I already have the solution and can just do it myself. It's like a rubber duck assistant. "ok claude today we're doing XYZ and here's what we currently have and how we need to change it to get there and here's the solution and it needs to go here and here and oh fuck it I'll just do it myself".

That said, it's pretty good at reading stuff and pulling out relatively obvious but high-effort things. Like "hey claude can you find all the places we do anything like XYZ". It's not quite as reliable as grep, but smarter. The main use case is still the "talking encyclopedia" to run ideas by and get the state of the literature on the subject, and occasionally the general-purpose omni-translator (for translating english into code, for example). But it's not a thinker.

>>3026
I've not tried deep research and I don't often find myself with research assistant type problems so I'm not sure how I would use it. But I have heard a bunch of good things. But yeah like you say that's just like hiring someone to thoroughly look things up for you.

