Update README_EN.md
README_EN.md  (+3 −0)
@@ -1,7 +1,10 @@
+GPU version of https://ai.gitcode.com/ascend-tribe/openpangu-embedded-1b-model/tree/main
+
 # **openPangu-Embedded-1B**
 
 [中文](README.md) | English
 
+
 ## 1. Introduction
 
 The openPangu-Embedded-1B is an efficient language model trained from scratch based on the Ascend NPU, with 1B parameters (excluding vocabulary embedding). The model employs a 26-layer dense architecture and was trained on approximately 10T tokens. Through model architecture design for Ascend Atlas 200I A2, optimized data and training strategies, openPangu-Embedded-1B achieves high precision while meeting the requirements for edge-side deployment.
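
The diff only adds a pointer to the GPU build of the model; neither the commit nor the README excerpt above shows how to load it. As a minimal sketch, assuming the checkpoint is published in Hugging Face transformers format and the repo id below (taken from the linked GitCode path) is merely illustrative, GPU-side generation might look like:

```python
# Minimal sketch: loading the GPU build of openPangu-Embedded-1B for generation.
# Assumptions (not stated in the diff): the checkpoint ships in Hugging Face
# transformers format, and the repo id below is hypothetical, mirroring the
# linked GitCode path rather than a confirmed hub location.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ascend-tribe/openpangu-embedded-1b-model"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 1B dense model fits easily on a single GPU in fp16
    device_map="cuda",
    trust_remote_code=True,
)

prompt = "Introduce the openPangu-Embedded-1B model in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```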