Table of Links
Abstract and 1 Introduction
2. Prior conceptualisations of intelligent assistance for programmers
3. A brief overview of large language models for code generation
4. Commercial programming tools that use large language models
5. Reliability, safety, and security implications of code-generating AI models
6. Usability and design studies of AI-assisted programming
7. Experience reports and 7.1. Writing effective prompts is hard
7.2. The activity of programming shifts towards checking and unfamiliar debugging
7.3. These tools are useful for boilerplate and code reuse
8. The inadequacy of existing metaphors for AI-assisted programming
8.1. AI assistance as search
8.2. AI assistance as compilation
8.3. AI assistance as pair programming
8.4. A distinct way of programming
9. Issues with application to end-user programming
9.1. Issue 1: Intent specification, problem decomposition and computational thinking
9.2. Issue 2: Code correctness, quality and (over)confidence
9.3. Issue 3: Code comprehension and maintenance
9.4. Issue 4: Consequences of automation in end-user programming
9.5. Issue 5: No code, and the dilemma of the direct answer
10. Conclusion
A. Experience report sources
References

9.4. Issue 4: Consequences of automation in end-user programming

In any AI system, we need to consider the consequences of automation. End-user programmers are known to turn to local experts, or "gardeners" (end-user programmers with an interest and expertise in programming who serve as gurus within the end-user programming environment), when they are unable to solve part of a problem (Nardi, 1993; Sarkar & Gordon, 2018). A tendency towards task-orientation, combined with the difficulty of completing their tasks, also leaves end-user programmers with limited attention for testing, or for carefully learning what is going on in their programs. Assuming that LLMs and the associated user experiences improve in the coming years, making end-user programming faster with LLMs than without, it is tempting to wonder whether programmers can be persuaded to invest the saved time and attention in activities such as learning about or testing their programs; if so, what would it take to bring about this change in behaviour? Another question concerns the role of such experts.
We conjecture that LLMs, or similar AI capabilities, will soon be able to answer a sizeable fraction of the questions that end-user programmers currently take to local experts. An open question, therefore, is how the roles, importance, and specialities of these experts within the end-user programming ecosystems of organizations will change. For example, will gardeners take on the role of educating users on how to take better advantage of AI? If so, how can we communicate the workings of such AI systems to technophile users and early adopters, so that they can enable others in their organization?

Authors:

(1) Advait Sarkar, Microsoft Research, University of Cambridge (advait@microsoft.com);
(2) Andrew D. Gordon, Microsoft Research, University of Edinburgh (adg@microsoft.com);
(3) Carina Negreanu, Microsoft Research (cnegreanu@microsoft.com);
(4) Christian Poelitz, Microsoft Research (cpoelitz@microsoft.com);
(5) Sruti Srinivasa Ragavan, Microsoft Research (a-srutis@microsoft.com);
(6) Ben Zorn, Microsoft Research (ben.zorn@microsoft.com).

This paper is available on arxiv under CC BY-NC-ND 4.0 DEED license.