Visual Studio: user frustrated by auto-completion
Hi, I'm a bit frustrated by Visual Studio's auto-completion feature. For example, I noticed it suggests only words that exactly match the prefix you've just typed: if I mistype a character, the word I intended to write is no longer suggested as a completion. As a result, the user experience is not as smooth as it should be. I don't remember having this kind of problem in Eclipse some 10 years ago.
Am I the only one frustrated by this behavior? Is it only a problem with my installation?
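To illustrate the behavior being described, here is a minimal sketch (not Visual Studio's or Roslyn's actual logic; the function names are made up): a strict prefix filter drops the intended word as soon as one character is mistyped, while even a crude typo-tolerant filter that allows one substituted character keeps it in the list.

```python
def strict_prefix_filter(candidates, typed):
    """Keep only candidates that start exactly with the typed text."""
    return [c for c in candidates if c.startswith(typed)]

def within_one_typo(candidate, typed):
    """True if the typed text matches the candidate's prefix with at
    most one substituted character (a crude edit-distance-1 check)."""
    if len(typed) > len(candidate):
        return False
    mismatches = sum(1 for a, b in zip(typed, candidate) if a != b)
    return mismatches <= 1

def tolerant_filter(candidates, typed):
    return [c for c in candidates if within_one_typo(c, typed)]

candidates = ["Configuration", "Console", "Convert"]

# The user meant "Console" but mistyped the second character.
print(strict_prefix_filter(candidates, "Cinsole"))  # []
print(tolerant_filter(candidates, "Cinsole"))       # ['Console']
```

With strict prefix matching the list goes empty after the typo; the tolerant variant still surfaces the intended identifier.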
10 Replies
Send feedback to the team.
And no, I don't think it's a problem with your install; the logic used to determine what to suggest is probably just very limited, that's all.
Understood. Yep, I know about that feature, but since it's quite difficult to formulate the problem clearly enough for the ticket not to get closed, I had ruled out that option.
Anyway, as usual, Visual Studio is very strange: they push hard on fancy stuff like Copilot, IntelliSense code prediction, AI and so on, and then they miss this basic feature, which would produce an extreme boost in developer productivity.
@Metasyntactic
yes?
@alkasel#159 please file an issue at github.com/dotnet/roslyn. Please give an example of what you wrote and what happened, ideally with screenshots. Thanks!
There can be huge gaps between what you typed and what you expected to see in the completion list. So for IntelliSense development, the algorithms have to generate the list based on known patterns and reasonable scenarios. The Visual Studio release you're using also matters, as I believe the latest release has already added more algorithms. You can report the issue to Microsoft as others suggested, and if you've captured a very specific pattern/scenario, it might be considered.
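One pattern-based approach many editors use instead of strict prefix matching is subsequence ("fuzzy") matching. Here is a hedged sketch of the idea; the function name is illustrative and this is not taken from any real completion engine:

```python
def is_fuzzy_match(candidate, typed):
    """True if the typed characters appear in order (not necessarily
    contiguously) in the candidate, case-insensitively."""
    it = iter(candidate.lower())
    # `ch in it` advances the iterator until ch is found, which
    # enforces that matches occur in order.
    return all(ch in it for ch in typed.lower())

candidates = ["WriteLine", "WriteAllLines", "ReadLine"]
print([c for c in candidates if is_fuzzy_match(c, "wrl")])
# ['WriteLine', 'WriteAllLines']
```

A rule like this tolerates skipped characters but still can't recover from a substituted one, which is one reason engines layer several heuristics (prefix, camel-case humps, subsequence) rather than relying on a single match rule.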
AI-based approaches like GitHub Copilot, however, don't rely on predefined algorithms but rather on the statistics the model learned during training. So they work quite differently, and can be better or worse in real-world scenarios, I think.
(Nothing in particular has changed recently with completion-list calculation in Roslyn with respect to filtering.)
Right. But end users usually aren't familiar with the finer-grained differences among IntelliSense, IntelliCode, and GitHub Copilot.
So when people complain that the list didn't appear to be useful, it's challenging to tell what exactly happened.
Please try to write a clear issue report, as that's the only way to make your case actionable.
Thanks for all the answers. I'll file an issue report as soon as I can.
👍