302 points mastermaq | 2 comments
hbn | No.44370715
My work gives us Copilot access since we're a Microsoft Shop™

It's too bad Copilot is by far the dumbest competitor in the space

My favorite interaction so far was when I prompted it with:

    ffmpeg command to convert movie.mov into a reasonably sized mp4
Sure, it's not the most detailed set of instructions, but I tend to give it just enough to get the job done, assuming the LLM knows what its purpose is as an LLM, and that always works with the other chatbots.

Copilot's response:

    I implemented and executed the Python code above to convert movie.mov to a reasonably sized movie.mp4 using ffmpeg.
    
    However, the Python code failed since it was not able to find and access movie.mov file.
    Do you want me to try again or is there anything else that I can help you with?
Note that I didn't cut anything out; it never actually provided any "Python code above".
replies(22): >>44370829 #>>44371002 #>>44371022 #>>44371053 #>>44371065 #>>44371287 #>>44371335 #>>44371358 #>>44371628 #>>44371891 #>>44371914 #>>44371978 #>>44372301 #>>44372892 #>>44373260 #>>44373493 #>>44373864 #>>44374419 #>>44374747 #>>44376761 #>>44377612 #>>44379849 #
NordSteve | No.44371358
I pasted your prompt:

  ffmpeg command to convert movie.mov into a reasonably sized mp4
into the Copilot app just now. Here's the response:

  ffmpeg -i movie.mov -vcodec libx264 -crf 23 -preset medium -acodec aac -b:a 128k movie_converted.mp4
with this explanation

  Here’s the breakdown:
  - -vcodec libx264: Uses the H.264 video codec, great for compression and compatibility.
  - -crf 23: Constant Rate Factor — lower means better quality and larger size; higher means smaller size but lower quality. 23 is a nice middle ground.
  - -preset medium: Controls encoding speed vs. compression efficiency. You can try slow or faster depending on your needs.
  - -acodec aac -b:a 128k: Compresses the audio using AAC at 128 kbps — pretty standard and compact.
  If you're looking for even smaller file sizes, try bumping -crf up to 26 or using -preset slow.
  Want to bake in scaling, remove metadata, or trim the video too? Happy to tailor the command further.
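For reuse, that one-liner can be wrapped in a tiny shell function. This is a sketch, not from the thread: the function name is made up, and it assumes ffmpeg is on your PATH.

```shell
#!/bin/sh
# Sketch of a wrapper around the command above (assumes ffmpeg is installed).
# CRF defaults to 23; raise it (e.g. 26) to trade quality for a smaller file.
shrink_mov() {
    src="$1"; dst="$2"; crf="${3:-23}"
    ffmpeg -i "$src" -vcodec libx264 -crf "$crf" -preset medium \
        -acodec aac -b:a 128k "$dst"
}

# Usage:
#   shrink_mov movie.mov movie.mp4            # CRF 23, the middle-ground default
#   shrink_mov movie.mov movie_small.mp4 26   # smaller file, lower quality
```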
replies(5): >>44371528 #>>44372925 #>>44373007 #>>44374040 #>>44376405 #
Aurornis | No.44373007
I did the same thing for several iterations and all of the responses were equally helpful.

We get these same anecdotes about terrible AI answers frequently in a local Slack I’m in. I think people love to collect them as proof that AI is terrible and useless. Meanwhile other people have no problem hitting the retry button and getting a new answer.

Some of the common causes of bad or weird responses that I’ve learned from having this exact same conversation over and over again:

- Some people use one never-ending singular session with Copilot chat, unaware that past context is influencing the answer to their next question. This is a common way to get something like Python code in response to a command line question if you’re in a Python project or you’ve been asking Python questions.

- They have Copilot set to use a very low quality model because they accidentally changed it, or they picked a model they thought was good but is actually a low-cost model meant for light work.

- They don’t realize that Copilot supports different models and you have to go out of your way to enable the best ones.

AI discussions are weird because there are two completely different worlds of people using the same tools. Some people are so convinced the tool will be bad that they give up at the slightest inconvenience, or even revel in the bad responses as proof that AI is bad. The other world spends some time learning how to use the tools and how to work with a solution that doesn't always output the right answer.

We all know AI tools are not as good as the out-of-control LinkedIn influencer hype, but I'm also tired of the endless claims that the tools are completely useless.

replies(9): >>44373253 #>>44373471 #>>44373558 #>>44373634 #>>44373822 #>>44375598 #>>44375719 #>>44376790 #>>44377124 #
muppetman | No.44373634
The "pick your model" thing is so stupid.

"How dumb do you want your AI to be?"

"Why do I have to select?"

"Because smart costs money"

"So... I can have dumb AI but it's cheaper?"

"Yes"

"How would the average person know which to pick?"

"Oh you can't know."

I hope they can invent an AI that knows which AI model my question should target cheaply.

replies(7): >>44373707 #>>44374320 #>>44374834 #>>44376182 #>>44376980 #>>44377115 #>>44380649 #
addandsubtract | No.44374834
> I hope they can invent an AI that knows which AI model my question should target cheaply.

Isn't that the idea of OpenRouter?

replies(1): >>44375337 #
oersted | No.44375337
Not exactly, but yeah. OpenRouter is a unified API, directory and billing system for LLM providers.

I think you are getting confused by the term "Model Routing", which, to be fair, OpenRouter does support, but it's a secondary feature, not their business focus. OpenRouter is mainly focused on helping you choose the best provider for a specific open model, based on each provider's history of price, speed, reliability, privacy, and so on.

The model routing itself is provided by NotDiamond.ai; there are a number of other startups in that space.

https://openrouter.ai/docs/features/model-routing
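For the curious, a minimal sketch of what calling the auto-router might look like. This assumes OpenRouter's OpenAI-compatible chat endpoint and its `openrouter/auto` model id; the helper name is made up, and the payload shape should be checked against the docs linked above.

```shell
#!/bin/sh
# Sketch: ask OpenRouter's router to pick a model per request (assumption:
# "openrouter/auto" on the OpenAI-compatible chat completions endpoint).
# Requires OPENROUTER_API_KEY in the environment.
ask_auto() {
    # $1: prompt text (assumed to contain no double quotes, for brevity)
    curl -s https://openrouter.ai/api/v1/chat/completions \
        -H "Authorization: Bearer $OPENROUTER_API_KEY" \
        -H "Content-Type: application/json" \
        -d "{\"model\":\"openrouter/auto\",\"messages\":[{\"role\":\"user\",\"content\":\"$1\"}]}"
}

# Usage:
#   ask_auto "ffmpeg command to convert movie.mov into a reasonably sized mp4"
```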