For the past several weeks I have been busy rewriting parts of my application from Java to JavaScript, and I used the free trial of GitHub Copilot to get assistance. So here is my report about this adventure.
First I need to admit that even after my 50 years in IT, I was really astonished. Sometimes I almost had the impression: wow, he is reading my mind. At one point it even seemed as if he had come up with an analogy that he had discovered all by himself. As Clarke’s 3rd law says, “Any sufficiently advanced technology is indistinguishable from magic”.
The user interface is great and simple: I type a comment, he writes one or more lines of suggestions in grey font, and I accept them with the Tab key (turning the grey into black) or reject them with cursor down. There is even a limited sort of conversational mode, in which he responds with a comment line that I can alter before resubmitting.
Let me distinguish between cases when I knew what I would do and cases when I did not. Since I am much less familiar with JavaScript than with Java, there were many instances of the latter type. But just when I was very focussed on the difficult new things, I also made lots of silly mistakes or ‘slips of the pen’ in the former type.
When I know what to do
The copilot saves me a lot of laborious typing work.
- He is good at writing verbose comments.
- Logging with ‘console.log’ becomes detailed without any effort.
- He adds similar variations: whenever there is a pattern like ‘x’ and ‘y’, ‘save’ and ‘load’, or ‘left’ and ‘right’, he is quick to suggest the second one after the first.
- He makes additions, but he does not make the consistent, systematic alterations that are so often necessary when adapting old code. For example, when I repeatedly had to change “translation.x” into “translation[0]”, he would not help me with that, only with the other half: “.y” to “[1]” as an added new line (see the sketch after this list). Fair enough, the interface is built for additions, and at those it is great.
- He did not protect me from the silly slips mentioned above, nor did he check for correctness and consistency.
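To illustrate the kind of systematic change I mean, here is a small invented sketch; the function and its arguments are hypothetical, only the ‘translation.x’ to ‘translation[0]’ change is taken from my real adaptation work:

```javascript
// Old style: translation stored as an object with named components.
function moveNode(node, dx, dy) {
  node.translation.x += dx;
  node.translation.y += dy;
}

// New style: translation stored as an array, the change I had to apply everywhere.
// Copilot happily appended the [1] line once I had typed the [0] line,
// but it would not rewrite the existing ".x"/".y" lines in place.
function moveNodeNew(node, dx, dy) {
  node.translation[0] += dx;
  node.translation[1] += dy;
}
```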
Of course I have been pampered by my previous environment: Java with Eclipse instead of JavaScript with VS Code. In that other language, little typos, omissions and inconsistencies were marked immediately. Long chains of references and pointers were checked, and if missing prerequisites were the cause, a one-click fix was offered. And the ‘intellisense’ drop-down (after typing a dot behind an object’s name) was shorter and more pertinent.
Of course that older language was even more laborious to type. But when a slip goes unnoticed, it may cost much more time to debug. And the copilot does not always care for syntactically correct code; his code is just similar to frequently used correct samples.
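A hypothetical example (names invented) of the kind of slip that Eclipse would have flagged for me in Java, but that JavaScript lets pass silently:

```javascript
const node = { translation: [0, 0], label: "root" };

// In Java, a misspelled member is a compile error and Eclipse underlines it at once.
// JavaScript just evaluates it to undefined, and the bug surfaces much later,
// e.g. as NaN creeping into coordinates.
const x = node.transaltion;   // typo: should be node.translation
console.log(x);               // prints "undefined"; no error, no warning
```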
While the burden of typing is relieved, there is another, new type of strain: constant attention to the suggestions. There is a shift from an active mode to a reactive one, whose consequences are difficult to judge. Maybe it will unbalance the two fundamental modes of brain operation. At least I personally hate it when I constantly need to surveil something instead of doing something, and that fatigue leads to mistakes.
But maybe it suits modern people, who may be more comfortable with reacting to, and distrusting, the busy input streams. Maybe it will strengthen our debugging capabilities, maybe even our collaborative capabilities and our coping with other people’s code. (But as for the copilot’s verbose, superficial comments, I doubt that they will contribute to the understanding of such code rather than just please reviewers or even lead them astray.)
The problem with the small slips and silly mistakes is that they are hard to spot in my own code and even harder to notice in the copilot’s suggestions, precisely because they look so similar to the correct code. And so it happened several times that I fell for some faulty code and then spent many hours debugging it.
When I do not know what to do
The other case is when I do not know what the correct syntax should look like, or when I do not even understand some of the concepts, which happened quite often with the less familiar programming language. Then, of course, I am grateful for any hint, and sometimes the copilot’s sample code contained at least a variation of what I needed, so I could continue trying and searching.
Here I need to grumble a bit (“everything used to be better” :-)) about why this is not my favorite style of learning. I loved the combination of User’s Guide and User’s Reference. The former was a top-down overview, and the latter was a concise description of the individual building blocks, bottom-up, to be looked up just in time.
By contrast, an explanation by a runnable code snippet is not always the best way to convey a difficult, cryptic construct with its tons of anonymous brackets and braces, or with lots of surrounding extra code just to make it run in the browser. At some point we realized that formerly we knew why something did not work, whereas now we do not know why it does work. With the pressure towards frameworks, this trend is accelerating. It is now much faster to just try something out than to first think it through. I admit I often find myself doing this too, in particular when it involves just toggling a binary option. But I found that sometimes it costs much more time and effort to eventually understand such wicked constructs.
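An invented example of the kind of construct I mean (not taken from my project): a single expression that is perfectly idiomatic modern JavaScript, yet conveys very little to someone who does not already know reduce, arrow functions, destructuring, computed keys and the spread operator:

```javascript
// Group a list of edges by their source node, in one dense expression.
const edges = [{ from: "a", to: "b" }, { from: "a", to: "c" }, { from: "b", to: "c" }];

const bySource = edges.reduce(
  (acc, { from, to }) => ({ ...acc, [from]: [...(acc[from] ?? []), to] }),
  {}
);

console.log(bySource); // { a: [ "b", "c" ], b: [ "c" ] }
```

Written out as an explicit loop it would take a few more lines, but every step would be visible.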
Copilot gave me several of these cumbersome experiences. Often I was not able to communicate to him what I wanted. (I also tried the new Bing Chat, where the request could at least be refined, but its code snippets were rarely more useful.) The most frustrating thing was not even when the copilot started to confabulate and wrote line after line of, e.g., exotic options for a Redo manager. The most frustrating thing was when he just imitated and ruminated on my own clueless attempts. The only advantage over a Google search was that he brought his code-snippet examples into my own context.
Personal context?
The interesting question is how much he learned to adapt to my personal context, versus how much he used standard, common patterns. But this is difficult to tell. He often reused my recently entered lines. My program contains ‘nodes’ and ‘edges’, which in the older parts were called ‘topics’ and ‘assocs’. I was surprised that he instantly complemented my ‘topics’ with ‘associations’. But then, other people call their stuff similarly. The trial is tied to my GitHub account, and I don’t know whether he knows the other versions of my program up there in the repository, which are not in my local VS Code. Indeed I do not know what he knows.
Similarly, how much did he do on my own computer, and how much did he do ‘at home’ on the giant machines? Below is a screenshot of my network statistics during average work with him, without explicitly prompting him. It seems to be quite a lot.
Conclusion
For me, one big benefit of the copilot is that he compensates for some shortcomings of other elements of our trade. For example, the difficulties of languages like JavaScript, where he approximates the correct snippets by common, frequently used patterns. Or the lack of good atomic reference information, which he replaces with example snippets placed right into the context at hand. The other big benefit is saving typing time for the small share of very frequent patterns.
This is, IMHO, not worth the massive energy consumption. But I do see potential in some use cases that could probably run on the user’s own machine: for one, searching and adapting the user’s own similar precedent snippets; and second, tracing and following all the references and pointer chains to check whether a code statement will meet its prerequisites, or to prepare them otherwise. But this would probably not need much similarity-based machine learning. Mere similarity, IMHO, is at odds with correct code.
And the goal of the massive investments is probably not the support of individual needs, but the replacement of costly humans by machines, who won’t strike.