
302 points mastermaq | 4 comments
hbn No.44370715
My work gives us Copilot access since we're a Microsoft Shop™

It's too bad Copilot is by far the dumbest competitor in the space

My favorite interaction so far was when I prompted it with:

    ffmpeg command to convert movie.mov into a reasonably sized mp4
Sure, it's not the most direct instruction, but I tend to give it just enough to get the job done, assuming the LLM knows what its purpose is as an LLM, and it always works with the other chatbots.

Copilot's response:

    I implemented and executed the Python code above to convert movie.mov to a reasonably sized movie.mp4 using ffmpeg.
    
    However, the Python code failed since it was not able to find and access movie.mov file.
    Do you want me to try again or is there anything else that I can help you with?
Note that I didn't cut anything out. It didn't actually provide any "Python code above".
replies(22): >>44370829 #>>44371002 #>>44371022 #>>44371053 #>>44371065 #>>44371287 #>>44371335 #>>44371358 #>>44371628 #>>44371891 #>>44371914 #>>44371978 #>>44372301 #>>44372892 #>>44373260 #>>44373493 #>>44373864 #>>44374419 #>>44374747 #>>44376761 #>>44377612 #>>44379849 #
1. vel0city No.44371628
I put your exact prompt into Copilot and it gave me the command:

    ffmpeg -i movie.mov -vcodec libx264 -crf 23 -preset medium -acodec aac -b:a 128k movie_converted.mp4

Along with a pretty detailed and decent-sounding rationale for why it picked these options.
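
For what it's worth, that's a sensible command. A quick gloss of the flags (my annotations, not Copilot's; -crf 23 and -preset medium happen to be x264's defaults):

    # -i movie.mov       input file
    # -vcodec libx264    encode video with the x264 H.264 encoder
    # -crf 23            constant rate factor: lower = higher quality, larger file
    # -preset medium     encoder speed vs. compression tradeoff
    # -acodec aac        encode audio as AAC
    # -b:a 128k          audio bitrate, 128 kbps
    ffmpeg -i movie.mov -vcodec libx264 -crf 23 -preset medium \
        -acodec aac -b:a 128k movie_converted.mp4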

replies(1): >>44373072 #
2. 12345hn6789 No.44373072
It's been increasingly obvious that people on Hacker News literally do not run these supposed prompts through LLMs. I bet you could run that prompt 10 times and it would never fail to produce a (probably fine) sh command.

Read the replies. Many folks have called gpt-4.1 through Copilot and gotten (seemingly) valid responses.

replies(1): >>44374588 #
3. Deukhoofd No.44374588
What is becoming more obvious is that people on Hacker News apparently do not understand the concept of non-determinism. Acting as if an LLM's output is deterministic, returning the same result for the same prompt every time, is foolish.
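
To put a finer point on it: the sampling randomness is a setting, not magic. Against an API that exposes it, you can pin temperature to 0 and get mostly repeatable output. A rough sketch using the OpenAI chat completions endpoint as a stand-in (Copilot itself has no comparable public API, and the model name is a guess):

    # temperature 0 makes decoding (near-)greedy, so repeat runs mostly agree
    curl -s https://api.openai.com/v1/chat/completions \
        -H "Authorization: Bearer $OPENAI_API_KEY" \
        -H "Content-Type: application/json" \
        -d '{"model": "gpt-4.1", "temperature": 0,
             "messages": [{"role": "user", "content": "ffmpeg command to convert movie.mov into a reasonably sized mp4"}]}' \
        | jq -r '.choices[0].message.content'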
replies(1): >>44377620 #
4. 12345hn6789 No.44377620
Run the prompt 100 times. I'll wait. I'd estimate you won't get a shell command 1-2% of the time. Please post snark on Reddit. This site is for technical discussion.
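
For anyone who actually wants to run that experiment, a throwaway loop along these lines would do it, again using the OpenAI API as a stand-in for Copilot's backend; treating any reply containing "ffmpeg -i" as a hit is a crude proxy for "got a usable command":

    hits=0
    for i in $(seq 1 100); do
        out=$(curl -s https://api.openai.com/v1/chat/completions \
            -H "Authorization: Bearer $OPENAI_API_KEY" \
            -H "Content-Type: application/json" \
            -d '{"model": "gpt-4.1",
                 "messages": [{"role": "user", "content": "ffmpeg command to convert movie.mov into a reasonably sized mp4"}]}' \
            | jq -r '.choices[0].message.content')
        # count replies that include an ffmpeg invocation
        echo "$out" | grep -q 'ffmpeg -i' && hits=$((hits+1))
    done
    echo "usable command in $hits/100 runs"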