Paper Reading (1)
SYDev
https://arxiv.org/abs/1706.03762
Attention Is All You Need: "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new ..."
Background
Sequence Data
Sequence data is data whose elements have an order. The elements of a sequence ...
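The excerpt introduces the attention mechanism that the paper builds on. As a minimal illustrative sketch (not the blog author's code), here is the paper's scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, in NumPy; the shapes and names below are chosen for illustration only.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, as in the paper."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise query-key similarity, scaled by sqrt(d_k)
    # Numerically stable softmax over each row of scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output position is a weighted sum of value vectors

# Toy sequence: 4 positions with d_k = 8 (illustrative values only)
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Unlike the recurrent models mentioned in the abstract, every output position here attends to all input positions in a single matrix product, with no sequential dependency between steps.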
Paper Reading
2023. 11. 5. 20:39