Copilot Experience

Author: 程序员札记 | Published 2023-12-24 08:45

    GitHub Copilot Sharing

    Introduction to GitHub Copilot (generated by Copilot)

    GitHub Copilot is an AI pair programmer that helps you write code faster and with less effort. It draws context from the code you are working on, suggesting whole lines or entire functions, and it can write tests, too. Copilot is powered by OpenAI Codex, an AI system created by OpenAI. Codex is a descendant of GPT-3 and has been trained on a selection of English text and source code from publicly available sources, including code in public repositories on GitHub. Copilot works with a broad set of frameworks and languages, including Python, JavaScript, TypeScript, Ruby, and Go.

    Setup

    • Create a github.com account if needed, or log in with your existing github.com account
    • Authorize via SSO
    • At that point, you’re not only in the organization, but onboarded for GitHub Copilot.
    • Go to https://docs.github.com/en/copilot/quickstart and follow the instructions to install the plug-in for your IDE.

    Note:

    • You need to activate your GitHub account before using it. You can do this through the "Your GitHub launch code" email sent by GitHub.
    • Copilot supports the following IDEs and versions:
    * IntelliJ IDEA Community Edition (2021.2.1 or later)
    * IntelliJ IDEA Ultimate Edition (2021.2.1 or later)
    * Visual Studio Code (1.60.0 or later)
    * Visual Studio (16.10 or later)
    * Visual Studio for Mac (8.10 or later)
    * Neovim (0.5.0 or later)
    * Sublime Text (4.0 or later)
    * ...
    

    How to Use

    Features

    1 Auto prompt

    While you are coding, when you start writing a method, Copilot will generate the method body for you.

    Sample 1: Enter a meaningful method name

    protected AppMeta getAppMet
    

    Copilot will generate the corresponding code, and you can accept it by pressing the Tab key.

    protected AppMeta getAppMeta(TaskContext context) {
        String appName = context.getInputString(Constants.APPNAME);
        AppMeta meta = metaRepo.findById(appName).orElse(null);
        if (meta == null) {
            throw new RuntimeException(String.format("Cannot find app meta information for %s", appName));
        }
        return meta;
    }
    

    Sample 2: Let Copilot generate code through code comments
    Loop through the list and query the completion status of the job

    // Traverse uncompletedItr for up to 600s, with an interval of 100s between each traversal. If there are still unfinished jobs after traversing, uncompletedJobIds will be returned.
    // If the job is completed, remove it from uncompletedJobIds.
    // If the job fails, return uncompletedJobIds directly.
    

    The generated code:

    // Traverse uncompletedItr for up to 600s, with an interval of 100s between each traversal. If there are still unfinished jobs after traversing, uncompletedJobIds will be returned.
    // If the job is completed, remove it from uncompletedJobIds.
    // If the job fails, return uncompletedJobIds directly.
    while (curTime <= 600) {
        try {
            TimeUnit.SECONDS.sleep(100);
            curTime += 100;
            while (uncompletedItr.hasNext()) {
                String statusLink = uncompletedItr.next();
                Map<String, Object> result = zebraClient.getZebraRequestInfo(statusLink);
                String status = (String) result.get("executionStatus");
                if (STATUS_COMPLETED.equals(status)) {
                    uncompletedItr.remove();
                } else if (STATUS_FAILED.equals(status)) {
                    context.error("Job %s failed", statusLink);
                    return uncompletedJobIds;
                }
            }
        } catch (InterruptedException ignored) {
        } catch (RuntimeException e) {
            context.warn(e.getMessage());
        }
        if (uncompletedJobIds.isEmpty()) {
            break;
        }
    }
    

    2 More suggestions/completions

    If the suggestion is not what you want, you can display more suggestions via a configured shortcut key or by clicking refresh on the GitHub Copilot tab on the right side of IntelliJ IDEA.

    //Generate a method for generating random numbers and return the result
    public static int generateRandomNu
    

    Clicking refresh in the GitHub Copilot tab prompts more suggestions:

    // solution 1
    public static int generateRandomNumber() {
        Random rand = new Random();
        int randomNum = rand.nextInt(1000000);
        return randomNum;
    }

    // solution 2
    public static int generateRandomNumber() {
        Random rand = new Random();
        return rand.nextInt(1000);
    }

    // solution 3
    public static int generateRandomNumber() {
        Random random = new Random();
        return random.nextInt();
    }

    // solution 4
    public static int generateRandomNumber() {
        Random random = new Random();
        return random.nextInt(100);
    }

    public static void main(String[] args) {
        System.out.println("Generate Random Number: " + generateRandomNumber());
    }

    // solution 5
    public static int generateRandomNumber() {
        Random random = new Random();
        // Generate a random number between 0 and 100
        int result = random.nextInt(100);
        return result;
    }
    

    You can also quickly open it with a shortcut key; the default is opt + return. If there is a conflict, you need to configure the corresponding GitHub Copilot shortcut key yourself. The specific configuration method is at the end of the article.

    3 GitHub Copilot Chat

    Open GitHub Copilot Chat through a shortcut key or the tab on the right side of IntelliJ IDEA, enter your question, and get an answer. It can be used like ChatGPT; the disadvantage is that it does not carry context between questions.
    For example, we can ask Copilot the following question.

    //Generate a method for generating random numbers and return the result
    

    The answer generated by Copilot is as follows:

    import java.util.Random;
    
    public class NumberGenerator {
        public static int generateRandomNumber() {
            Random random = new Random();
            return random.nextInt(100); // This will return a random number between 0 (inclusive) and 100 (exclusive)
        }
    }
    

    Application scenes at work

    1. Code Completion

    Refer to the above section.

    2. Refactor Code

    For complex logic and heavily coupled code, Copilot can help with refactoring. Refactor the following code; it does too much in one method.

    package com.garlic.app.tasks.adoptionDRP;
    
    import com.garlic.app.models.drpadoption.ProdPoolMetadataInfoModel;
    import org.elasticsearch.common.Strings;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.context.ApplicationContext;
    import org.springframework.stereotype.Component;
    
    import java.util.Collection;
    import java.util.List;
    import java.util.Objects;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    @Component
    public class AutoCreateDRPExecutorFactory {
        @Autowired
        List<AbstractAutoCreateDRPExecutor> executorList;
    
        private static final Logger LOGGER = LoggerFactory.getLogger(AutoCreateDRPExecutorFactory.class);
    
        @Autowired
        private ApplicationContext appContext;
        public List<AbstractAutoCreateDRPExecutor> getExecutor(List<ProdPoolMetadataInfoModel> prodPoolMetadataInfoModels,final Boolean repeat){
            return prodPoolMetadataInfoModels
                    .stream()
                    .filter(e -> {
                        if(Strings.isNullOrEmpty(e.getStgPoolId())){
                            LOGGER.info(String.format("App %s prod pool %s do not have staging pool info",e.getAppName(),e.getResourceId()));
                            return false;
                        }
                        return true;
                    })
                    .collect(Collectors.groupingBy(ProdPoolMetadataInfoModel::getAppName)).entrySet().stream()
                    .map(entry -> entry.getValue()
                            .stream()
    //                            .filter(prodPoolMetadataInfoModel -> !prodPoolMetadataInfoModel.isProdPoolE2EPipeline())
                            .filter(prodPoolMetadataInfoModel -> {
                                if(!repeat && prodPoolMetadataInfoModel.isProdPoolE2EPipeline()){
                                    LOGGER.error(String.format("App %s prod pool %s exists E2E pipeline already.",prodPoolMetadataInfoModel.getAppName(),prodPoolMetadataInfoModel.getResourceId()));
                                }
                                return  repeat ? repeat : !prodPoolMetadataInfoModel.isProdPoolE2EPipeline();
                            })
                            .collect(Collectors.groupingBy(ProdPoolMetadataInfoModel::getCommitId))
                            .entrySet().stream().map(e -> {
                                for (int i = 0; i < executorList.size(); i++) {
                                    if (executorList.get(i).checkExecutorType(e.getValue())) {
                                        return executorList.get(i).getBean(appContext, e.getValue(), entry.getKey());
                                    }
                                }
                                LOGGER.error(String.format("Create autoCreateDRPExecutor failed,appName %s commit id %s", entry.getKey(), e.getKey()));
                                return null;
                            }).filter(Objects::nonNull).collect(Collectors.toList())).flatMap(Collection::stream).collect(Collectors.toList());
        }
    }
    

    The generated result is as follows:

    package com.garlic.app.tasks.adoptionDRP;
    
    import com.garlic.app.models.drpadoption.ProdPoolMetadataInfoModel;
    import org.elasticsearch.common.Strings;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.context.ApplicationContext;
    import org.springframework.stereotype.Component;
    
    import java.util.List;
    import java.util.Objects;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;
    
    @Component
    public class AutoCreateDRPExecutorFactory {
        @Autowired
        List<AbstractAutoCreateDRPExecutor> executorList;
    
        private static final Logger LOGGER = LoggerFactory.getLogger(AutoCreateDRPExecutorFactory.class);
    
        @Autowired
        private ApplicationContext appContext;
        public List<AbstractAutoCreateDRPExecutor> getExecutor(List<ProdPoolMetadataInfoModel> prodPoolMetadataInfoModels, final Boolean repeat) {
            return prodPoolMetadataInfoModels.stream()
                    .filter(this::hasStagingPoolInfo)
                    .collect(Collectors.groupingBy(ProdPoolMetadataInfoModel::getAppName))
                    .values()
                    .stream()
                    .flatMap(modelsForApp -> this.createExecutorsForApp(modelsForApp, repeat))
                    .collect(Collectors.toList());
        }
    
        private boolean hasStagingPoolInfo(ProdPoolMetadataInfoModel model) {
            if (Strings.isNullOrEmpty(model.getStgPoolId())) {
                LOGGER.info(String.format("App %s prod pool %s do not have staging pool info", model.getAppName(), model.getResourceId()));
                return false;
            }
            return true;
        }
    
        private Stream<AbstractAutoCreateDRPExecutor> createExecutorsForApp(List<ProdPoolMetadataInfoModel> modelsForApp, Boolean repeat) {
            return modelsForApp.stream()
                    .filter(model -> shouldCreateExecutor(model, repeat))
                    .collect(Collectors.groupingBy(ProdPoolMetadataInfoModel::getCommitId))
                    .values()
                    .stream()
                    .map(this::createExecutorForCommit)
                    .filter(Objects::nonNull);
        }
    
    
        private boolean shouldCreateExecutor(ProdPoolMetadataInfoModel model, Boolean repeat) {
            if (!repeat && model.isProdPoolE2EPipeline()) {
                LOGGER.error(String.format("App %s prod pool %s exists E2E pipeline already.", model.getAppName(), model.getResourceId()));
                return false;
            }
            return repeat || !model.isProdPoolE2EPipeline();
        }
    
        private AbstractAutoCreateDRPExecutor createExecutorForCommit(List<ProdPoolMetadataInfoModel> modelsForCommit) {
            for (AbstractAutoCreateDRPExecutor executor : executorList) {
                if (executor.checkExecutorType(modelsForCommit)) {
                    return executor.getBean(appContext, modelsForCommit, modelsForCommit.get(0).getAppName());
                }
            }
            LOGGER.error(String.format("Create autoCreateDRPExecutor failed,appName %s commit id %s", modelsForCommit.get(0).getAppName(), modelsForCommit.get(0).getCommitId()));
            return null;
        }
    }
    

    We can see that the code is more readable and the logic is clearer.

    3. Generate Test Case

    Copilot can help you generate test cases. For example, we can generate test cases for the following code.

    package com.garlic.cntrpdlc.migration.task;
    
    import java.util.Random;
    
    
    public class NumberGenerator {
        // Generate a method for generating random numbers
        public static void NumberGenerator() {
            Random random = new Random();
            int number = random.nextInt(100);
            System.out.println(number);
        }
    }
    
    

    We can enter the following command in the Copilot chat window, and Copilot will generate the test case.

    /tests NumberGenerator
    

    The generated result is as follows:

    import com.garlic.cntrpdlc.migration.task.NumberGenerator;
    import org.junit.jupiter.api.Test;
    import java.io.ByteArrayOutputStream;
    import java.io.PrintStream;
    import static org.junit.jupiter.api.Assertions.assertTrue;
    
    public class NumberGeneratorTest {
    
        @Test
        public void numberGeneratorOutputsRandomNumber() {
            ByteArrayOutputStream outContent = new ByteArrayOutputStream();
            System.setOut(new PrintStream(outContent));
    
            NumberGenerator.NumberGenerator();
    
            String output = outContent.toString();
            assertTrue(output.matches("\\d{1,2}\\r?\\n"), "Output should be a number between 0 and 99");
        }
    }
    

    4. Simplify

    In some cases, Copilot can help simplify the code.

    package com.garlic.cntrpdlc.migration.task;
    
    import java.util.Random;
    
    
    public class NumberGenerator {
      //Generate a method for generating random numbers and return the result
        public static void NumberGenerator() {
            Random random = new Random();
            int number = random.nextInt(100);
            System.out.println(number);
        }
    }
    
    

    After simplification:

    package com.garlic.cntrpdlc.migration.task;
    
    import java.util.Random;
    
    public class NumberGenerator {
      public static void generateRandomNumber() {
        System.out.println(new Random().nextInt(100));
      }
    }
    

    Copilot can also simplify code in other ways, such as simplifying its logic or restructuring the code.
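For instance, a logic simplification often collapses nested null/empty checks into a single expression. The sketch below is a hypothetical example (the `describeVerbose`/`describeSimplified` methods are not from this article) of the kind of rewrite Copilot might suggest:

```java
import java.util.Optional;

public class SimplifyDemo {

    // Verbose version: nested checks with duplicated "unknown" branches.
    public static String describeVerbose(String name) {
        if (name != null) {
            if (!name.isEmpty()) {
                return "User: " + name;
            } else {
                return "User: unknown";
            }
        } else {
            return "User: unknown";
        }
    }

    // Simplified version Copilot might suggest: one Optional chain,
    // behaviorally identical to the verbose version.
    public static String describeSimplified(String name) {
        return Optional.ofNullable(name)
                .filter(n -> !n.isEmpty())
                .map(n -> "User: " + n)
                .orElse("User: unknown");
    }

    public static void main(String[] args) {
        System.out.println(describeVerbose("Ann"));   // User: Ann
        System.out.println(describeSimplified(null)); // User: unknown
    }
}
```

As always, review such a suggestion before accepting: the simplified form must preserve every branch of the original.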

    5. Fix code issue

    GitHub Copilot can assist in fixing code by providing suggestions for correcting syntax errors, improving code structure, or optimizing code performance. When you write code and encounter an issue, GitHub Copilot can provide a suggestion to fix the problem based on the context of the code. It can also suggest alternative approaches to a problem or improvements to the code. However, it's important to note that while GitHub Copilot can provide suggestions, it's up to the developer to review and decide whether to apply the suggested fixes.

    public int generateRandomNumber() {
        Random random = new Random();
        int randomNumber = random.nextInt(100);
        System.out.println(randomNumber / 0);
        return randomNumber;
    }
    

    In this case, GitHub Copilot might suggest the following fix:

    public int generateRandomNumber() {
        Random random = new Random();
        int randomNumber = random.nextInt(100);
        System.out.println(randomNumber); // Removed division by zero
        return randomNumber;
    }
    

    6. Debugging

    Copilot can also help you debug code. For example, if you want to debug the following code, you can enter the following command in the Copilot chat window.

    debug FlowManager.FlowManager method
    

    The generated result is as follows:


    The FlowManager.FlowManager method in the FlowManager class is a constructor that is responsible for loading task flows from YAML files located in the classpath under the /jobs directory. It uses the PathMatchingResourcePatternResolver to find these files and then deserializes them into TaskFlow objects using the ModelHelper.YMAPPER.readValue method. These TaskFlow objects are then stored in the flowMap for later use.

    If you're experiencing issues with this method, here are a few things you might want to check:

    1. Ensure that the YAML files are correctly placed in the classpath under the /jobs directory.
    2. Check the format of the YAML files. They should be correctly formatted and should be able to be deserialized into TaskFlow objects.
    3. Check for IOExceptions. If there's an issue with reading the files, an IOException will be thrown.

    To debug this method, you can add logging statements to provide more information about what the method is doing. Here's an example of how you might modify the method to include some additional logging:

    public FlowManager() {
        ResourcePatternResolver resolver = new PathMatchingResourcePatternResolver(FlowManager.class.getClassLoader());
        try {
            Resource[] resources = resolver.getResources("classpath:/jobs/**/*.yaml");
            LOGGER.info("Found " + resources.length + " resources");
            for (Resource r : resources) {
                LOGGER.info("Processing resource: " + r.getFilename());
                TaskFlow taskFlow = ModelHelper.YMAPPER.readValue(r.getInputStream(), TaskFlow.class);
                flowMap.put(taskFlow.getName(), taskFlow);
                LOGGER.info("Added task flow: " + taskFlow.getName());
            }
        } catch (IOException e) {
            LOGGER.error("Cannot autowire the jobs from classpath", e);
            throw new InitializationException("Cannot autowire the jobs from classpath", e);
        }
    }
    

    This will log the number of resources found, the name of each resource being processed, and the name of each task flow added to the map. If an exception is thrown, it will be logged as well. This should give you a better idea of what's happening when this method is called.


    7. Generate documentation for Java code

    GitHub Copilot can help generate documentation for Java code. It can provide suggestions for JavaDoc comments for classes, methods, and fields based on their names and context. Here's an example:

    /doc AutoCreateDRPExecutorFactory
    
    package com.garlic.app.tasks.adoptionDRP;
    
    import com.garlic.app.models.drpadoption.ProdPoolMetadataInfoModel;
    import org.elasticsearch.common.Strings;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.context.ApplicationContext;
    import org.springframework.stereotype.Component;
    
    import java.util.List;
    import java.util.Objects;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;
    
    /**
     * This class is responsible for creating executors for the AutoCreateDRP process.
     * It uses the Spring Boot framework and is annotated as a Component, meaning it is a singleton and can be autowired.
     */
    @Component
    public class AutoCreateDRPExecutorFactory {
        // List of executors to be used in the creation process
        @Autowired
        List<AbstractAutoCreateDRPExecutor> executorList;
    
        // Logger for this class
        private static final Logger LOGGER = LoggerFactory.getLogger(AutoCreateDRPExecutorFactory.class);
    
        // Spring's ApplicationContext, used to get beans
        @Autowired
        private ApplicationContext appContext;
    
        /**
         * This method returns a list of executors for the given product pool metadata models.
         * It filters out models without staging pool info and groups them by app name.
         * Then, it creates executors for each app.
         *
         * @param prodPoolMetadataInfoModels List of product pool metadata models
         * @param repeat Boolean flag indicating whether to repeat the process
         * @return List of executors
         */
        public List<AbstractAutoCreateDRPExecutor> getExecutor(List<ProdPoolMetadataInfoModel> prodPoolMetadataInfoModels, final Boolean repeat) {
            return prodPoolMetadataInfoModels.stream()
                    .filter(this::hasStagingPoolInfo)
                    .collect(Collectors.groupingBy(ProdPoolMetadataInfoModel::getAppName))
                    .values()
                    .stream()
                    .flatMap(modelsForApp -> this.createExecutorsForApp(modelsForApp, repeat))
                    .collect(Collectors.toList());
        }
    
        /**
         * This method checks if a model has staging pool info.
         *
         * @param model The product pool metadata model
         * @return true if the model has staging pool info, false otherwise
         */
        private boolean hasStagingPoolInfo(ProdPoolMetadataInfoModel model) {
            if (Strings.isNullOrEmpty(model.getStgPoolId())) {
                LOGGER.info(String.format("App %s prod pool %s do not have staging pool info", model.getAppName(), model.getResourceId()));
                return false;
            }
            return true;
        }
    
        /**
         * This method creates executors for a given app.
         * It filters out models that should not have an executor created and groups them by commit id.
         * Then, it creates an executor for each commit.
         *
         * @param modelsForApp List of product pool metadata models for an app
         * @param repeat Boolean flag indicating whether to repeat the process
         * @return Stream of executors
         */
        private Stream<AbstractAutoCreateDRPExecutor> createExecutorsForApp(List<ProdPoolMetadataInfoModel> modelsForApp, Boolean repeat) {
            return modelsForApp.stream()
                    .filter(model -> shouldCreateExecutor(model, repeat))
                    .collect(Collectors.groupingBy(ProdPoolMetadataInfoModel::getCommitId))
                    .values()
                    .stream()
                    .map(this::createExecutorForCommit)
                    .filter(Objects::nonNull);
        }
    
        /**
         * This method checks if an executor should be created for a model.
         *
         * @param model The product pool metadata model
         * @param repeat Boolean flag indicating whether to repeat the process
         * @return true if an executor should be created, false otherwise
         */
        private boolean shouldCreateExecutor(ProdPoolMetadataInfoModel model, Boolean repeat) {
            if (!repeat && model.isProdPoolE2EPipeline()) {
                LOGGER.error(String.format("App %s prod pool %s exists E2E pipeline already.", model.getAppName(), model.getResourceId()));
                return false;
            }
            return repeat || !model.isProdPoolE2EPipeline();
        }
    
        /**
         * This method creates an executor for a given commit.
         * It iterates over the list of executors and checks if the executor type matches the models for the commit.
         * If a match is found, it returns the bean for that executor.
         * If no match is found, it logs an error and returns null.
         *
         * @param modelsForCommit List of product pool metadata models for a commit
         * @return An executor if a match is found, null otherwise
         */
        private AbstractAutoCreateDRPExecutor createExecutorForCommit(List<ProdPoolMetadataInfoModel> modelsForCommit) {
            for (AbstractAutoCreateDRPExecutor executor : executorList) {
                if (executor.checkExecutorType(modelsForCommit)) {
                    return executor.getBean(appContext, modelsForCommit, modelsForCommit.get(0).getAppName());
                }
            }
            LOGGER.error(String.format("Create autoCreateDRPExecutor failed,appName %s commit id %s", modelsForCommit.get(0).getAppName(), modelsForCommit.get(0).getCommitId()));
            return null;
        }
    }
    

    The documentation in the code above was generated by Copilot.

    8. Explain code logic

    Enter the following in GitHub Copilot Chat, and Copilot will explain the logic of the code.

    /explain findAccessPoints
    

    For example, we can let Copilot explain the logic of the following code.

    public List<String> findAccessPoints(String poolId, TessEnv env, String user) {
        List<String> accessPoints = new ArrayList<>();
        Map<String, Object> nsvcs = tessClient.queryFcpObject(NSVC_API_VERSION, getNamespaceByPoolId(poolId, env),
                NSVC_RESOURCE, null, env, user);
        Optional.ofNullable(nsvcs).flatMap(r -> Optional.ofNullable(nsvcs.get("items"))).ifPresent(items -> {
            ((List<Map<String, Object>>) items).forEach(i -> {
                Optional.ofNullable(i.get("metadata")).ifPresent(meta -> {
                    String nsvc = String.valueOf(((Map<String, Object>) meta).get("name"));
                    if (nsvc.contains("garlic.com")) {
                        accessPoints.add(nsvc);
                    }
                });
            });
        });
        return accessPoints;
    }
    

    Explain the logic of the code, the generated result is as follows.

    The `findAccessPoints` method is part of a Java class. This method is designed to find and return a list of access points for a given pool ID, Tess environment, and user.
    
    The method accepts three parameters: `poolId`, `env`, and `user`. `poolId` is a string representing the ID of the pool for which to find access points. `env` is an instance of `TessEnv`, representing the Tess environment. `user` is a string representing the user.
    
    The method first initializes an empty ArrayList named `accessPoints`. This list will be used to store the found access points.
    
    Next, the method calls the `queryFcpObject` method on a `tessClient` object, passing in several parameters including the API version, namespace, resource, environment, and user. This method returns a Map, with keys as Strings and values as Objects, which is stored in the `nsvcs` variable.
    
    Then, the method uses Java 8's Optional and Stream APIs to process the `nsvcs` map. It first checks if the `nsvcs` map is not null and contains a key named "items". If it does, it casts the value associated with the "items" key to a List of Maps, where the keys are Strings and the values are Objects. It then iterates over each item in this list.
    
    For each item, the method checks if it contains a key named "metadata". If it does, it retrieves the value associated with the "name" key from the metadata map and stores it in the `nsvc` variable. If the `nsvc` string contains "garlic.com", it is added to the `accessPoints` list.
    
    Finally, the method returns the `accessPoints` list. This list contains the access points found for the given pool ID, Tess environment, and user.
    

    9. Refinement

    GitHub Copilot can not only refine code, but also refine article paragraphs.

    There are many issues occurred during the migration process, which will block  
    the CPDLC migration. For these known issues, we summarize the following  
    pattern to help detect and solve problems in advance to make the whole process more efficient.
    

    Refine result:

    The migration process encounters numerous issues that can hinder the progress  
    of CPDLC migration. We've identified and summarized these known issues into  
    a pattern to facilitate early detection and resolution,  
    thereby enhancing the overall efficiency of the process.
    

    10. Question and Answer

    Type a question in GitHub Copilot Chat or in a comment, and you will get an answer from Copilot. For example, we can ask Copilot the following question.

    Command to get the log information of the app container in the pod of the namespace tess-deev in k8s
    

    The answer generated by Copilot is as follows:

    kubectl logs -n <namespace> <pod_name> -c <container_name>
    
    e.g. kubectl logs -n tess-deev my-pod -c my-container
    

    11. Document Completion

    11.1 Generate a table of contents in markdown

    For example, the table of contents of this document is generated by Copilot. It is more convenient to generate the table of contents through Copilot.
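For instance, the top-level headings of this article could be collected into a table of contents like the following (a sketch of what Copilot might produce; the exact anchor links depend on the markdown renderer):

```markdown
- [Setup](#setup)
- [How to Use](#how-to-use)
  - [Features](#features)
- [Application scenes at work](#application-scenes-at-work)
- [Shortcut key](#shortcut-key)
```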

    11.2 Text Prompt

    For example, the following is the overview from the readme of the Container PDLC migration tool. After I deleted it, Copilot automatically regenerated the overview content.

    Old overview:

    Automation flow currently supports to migrate C3 pools to container based workloads in automation way. The automation flow support to add manual phases and can continue to process if manual phase status changed to complete. 
    

    New overview:

    Automation Flow is a workflow engine to automate the migration process of ContainerPDLC. It is built on top of [Conductor](https://netflix.github.io/conductor/) and [Spring Boot](https://spring.io/projects/spring-boot). It provides a set of APIs to trigger the migration flow and check the status of the migration. It also provides a web portal to trigger the migration flow and check the status of the migration.
    

    The new overview above was generated by Copilot.

    Shortcut key

    Configure shortcut keys

    • Settings → Keymap → Plugins → GitHub Copilot

    Default shortcut keys

    • Code Completion: opt + \
    • Pre Completion: opt + [
    • Next Completion: opt + ]
    • Show Top 10 Completions: opt + return
