# CodeTrans model for program synthesis

Pretrained model on a Lisp-inspired DSL programming language, using the T5-large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model can be used to generate Lisp-inspired DSL code from natural-language task descriptions.

### How to use

Here is how to use this model to generate Lisp-inspired DSL code with the Transformers `SummarizationPipeline`:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_multitask", skip_special_tokens=True),
    device=0  # first GPU; use device=-1 to run on CPU
)

# The input is a plain-English task description; the output is the generated DSL code.
tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
pipeline([tokenized_code])
```

Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/program%20synthesis/large_model.ipynb).
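
Note that `AutoModelWithLMHead` targets older versions of Transformers. As a rough sketch (not part of the original card), the same checkpoint can also be loaded through the current seq2seq API; the `max_length` value below is an arbitrary choice:

```python
# Sketch for recent Transformers versions, where AutoModelWithLMHead has been removed.
# The checkpoint is a standard T5 encoder-decoder, so AutoModelForSeq2SeqLM loads it.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "SEBIS/code_trans_t5_large_program_synthese_multitask"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

description = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
inputs = tokenizer(description, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)  # max_length chosen arbitrarily
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```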

## Training data

The datasets for the supervised training tasks can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 220,000 steps in total, using a sequence length of 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
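
For reference, the inverse square root schedule can be written as a small function. This is an illustrative sketch only; the 10,000-step warm-up value is an assumption, not a figure reported for CodeTrans:

```python
def inverse_sqrt_lr(step: int, warmup_steps: int = 10_000) -> float:
    """T5-style inverse square root learning-rate schedule.

    The rate is held constant at 1/sqrt(warmup_steps) during warm-up,
    then decays as 1/sqrt(step) for the rest of pre-training.
    (warmup_steps=10_000 is an assumed value for illustration.)
    """
    return 1.0 / (max(step, warmup_steps) ** 0.5)

# Example values over the 220,000-step pre-training run
for step in (1_000, 10_000, 100_000, 220_000):
    print(f"step {step:>7}: lr = {inverse_sqrt_lr(step):.6f}")
```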

## Evaluation results

For the program synthesis task, the different models achieve the following results (in BLEU score):

Test results:

| Language / Model        |      LISP      |
| ----------------------- | :------------: |
| CodeTrans-ST-Small      |     89.43      |
| CodeTrans-ST-Base       |     89.65      |
| CodeTrans-TF-Small      |     90.30      |
| CodeTrans-TF-Base       |     90.24      |
| CodeTrans-TF-Large      |     90.21      |
| CodeTrans-MT-Small      |     82.88      |
| CodeTrans-MT-Base       |     86.99      |
| CodeTrans-MT-Large      |     90.27      |
| CodeTrans-MT-TF-Small   |   **90.31**    |
| CodeTrans-MT-TF-Base    |     90.30      |
| CodeTrans-MT-TF-Large   |     90.17      |
| State of the art        |     85.80      |
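
The BLEU numbers above compare generated DSL programs against reference solutions. As a hypothetical illustration (the strings and the use of `sacrebleu` are assumptions; the actual evaluation scripts live in the CodeTrans repository), scoring one prediction against one reference could look like:

```python
# Hypothetical BLEU scoring of a generated DSL snippet against a reference solution.
# The example strings are made up; the reported numbers come from the CodeTrans evaluation setup.
import sacrebleu

hypotheses = ["( map a ( lambda1 ( - arg1 b ) ) )"]    # model output (made up)
references = [["( map a ( lambda1 ( - arg1 b ) ) )"]]  # gold program (made up)

score = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {score.score:.2f}")  # 100.00 for an exact match
```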

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)