
858 points cryptophreak | 1 comment
Vox_Leone No.42935485
I call it 'structured prompting' [think pseudo-code]. It strikes a nice balance between human-readable logic and structured programming, letting the LLM focus on generating accurate code from clear steps. It's especially useful when you want to specify the what (the logic) without worrying too much about the how (syntax and language-specific details). A system that supports this kind of input well would be a big step toward making code generation more intuitive and efficient. Good old UML could also be used.

Example of a Structured Pseudo-Code Prompt:

Let’s say you want to generate code for a function that handles object detection:

'''Function: object_detection
Input: image
Output: list of detected objects

Steps:
1. Initialize model (load pretrained object detection model)
2. Preprocess the image (resize, normalize, etc.)
3. Run the image through the model
4. Extract bounding boxes and confidence scores from the model's output
5. Return objects with confidence greater than 0.5 as a list of tuples (object_name, bounding_box)

Language: Python'''
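A sketch of the kind of code the LLM might produce from the prompt above. The `model`, `preprocess`, and `class_names` arguments are stand-ins for illustration (a real pipeline would plug in something like a pretrained torchvision detector); only the step structure mirrors the pseudo-code:

```python
def object_detection(image, model, preprocess, class_names, threshold=0.5):
    """Follow the pseudo-code steps: preprocess, run model, filter by confidence.

    `model` is assumed to return parallel lists of
    (label_ids, bounding_boxes, confidence_scores).
    """
    tensor = preprocess(image)             # step 2: resize, normalize, etc.
    labels, boxes, scores = model(tensor)  # steps 3-4: inference + extraction
    # step 5: keep detections above the confidence threshold
    return [
        (class_names[label], box)
        for label, box, score in zip(labels, boxes, scores)
        if score > threshold
    ]

# Usage with stub components (hypothetical labels/boxes, just to show the shape):
fake_model = lambda t: ([0, 1], [(0, 0, 10, 10), (5, 5, 8, 8)], [0.9, 0.3])
identity = lambda img: img
object_detection("img", fake_model, identity, ["cat", "dog"])
# -> [("cat", (0, 0, 10, 10))]  (the 0.3-confidence "dog" is filtered out)
```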

yazmeya No.42935688
Why not just give it the desired function signature and comments in the function body, in Python?
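That alternative prompt might look like the following skeleton: the same information as the pseudo-code, but expressed as a Python stub the model fills in (the signature and comments here are one possible phrasing, not a fixed format):

```python
from typing import List, Tuple

def object_detection(image) -> List[Tuple[str, tuple]]:
    """Return detected objects as (object_name, bounding_box) tuples."""
    # Load a pretrained object detection model
    # Preprocess the image (resize, normalize, etc.)
    # Run the image through the model
    # Extract bounding boxes and confidence scores from the output
    # Return objects with confidence greater than 0.5
    ...
```

The trade-off: the signature pins down names and types precisely, while the pseudo-code form stays language-agnostic until the final "Language:" line.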