Paper Reading (1)
SYDev

https://arxiv.org/abs/1706.03762 — Attention Is All You Need: "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new …"

Background: Sequence Data. Sequence data means data in which the elements have an order. The elements of a sequence …
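
Since the post reads Attention Is All You Need, a small sketch of the paper's core operation, scaled dot-product attention, may help make the excerpt concrete. This is a minimal illustrative NumPy version, not the authors' code; the shapes and toy inputs are assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention from 'Attention Is All You Need'.

    Q: queries, shape (seq_len_q, d_k)   # shapes are illustrative assumptions
    K: keys,    shape (seq_len_k, d_k)
    V: values,  shape (seq_len_k, d_v)
    Returns: output of shape (seq_len_q, d_v).
    """
    d_k = Q.shape[-1]
    # Similarity scores, scaled by sqrt(d_k) so the softmax does not saturate.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors.
    return weights @ V

# Toy example: 3 query positions attending over 4 key/value positions.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 16))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 16)
```

The scaling by sqrt(d_k) is the detail the paper's title operation adds over plain dot-product attention: without it, large key dimensions push the softmax into regions with vanishing gradients.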
Paper Reading
2023. 11. 5. 20:39