
[LLM] Coursera - Generative AI with LLMs - transformer, Hugging Face, pre-trained prompts

아나엘 2023. 9. 9. 17:05

https://github.com/google-research/FLAN/blob/main/flan/v2/templates.py
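
The templates in that file are the kind of instruction prompts FLAN models were trained on. Before experimenting with them, you need a pre-trained model, its tokenizer, and a dialogue dataset. A minimal setup sketch, assuming the course lab's choices of google/flan-t5-base and the knkarthick/dialogsum dataset (swap in other names if your setup differs):

```python
# Minimal setup sketch; the model and dataset names are assumptions taken from
# the course lab (adjust to whatever you are actually using).
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

dataset = load_dataset("knkarthick/dialogsum")

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# One test dialogue and its human-written reference summary.
dialogue = dataset["test"][0]["dialogue"]
reference_summary = dataset["test"][0]["summary"]
```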

 

Exercise:

- Experiment with the prompt text and see how the inferences change. Do the inferences change if you end the prompt with just an empty string vs. "Summary:"? (see the sketch after this list)

- Try rephrasing the beginning of the prompt text from "Summarize the following conversation." to something different, and see how it influences the generated output.
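
A sketch of the first experiment, reusing the model, tokenizer, and dialogue loaded above; the helper name `summarize` and the exact prompt wording are only illustrative:

```python
# Compare ending the prompt with nothing vs. "Summary: ".
def summarize(prompt, max_new_tokens=50):
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(inputs["input_ids"], max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

prompt_plain = f"Summarize the following conversation.\n\n{dialogue}\n\n"
prompt_with_cue = f"Summarize the following conversation.\n\n{dialogue}\n\nSummary: "

print(summarize(prompt_plain))
print(summarize(prompt_with_cue))
```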

 

Using these templates for zero-shot inference gives better results. For this summarization task, the "Summary:" part of the prompt is the important piece.

 

Likewise, combining the prompt with one-shot inference (meaning I give the model one complete example, including the correct answer as written by a human) makes the output more accurate. With few-shot inference, if the number of examples gets too large, it is better to go back and tune the model parameters instead.
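
A hedged sketch of building one-shot and few-shot prompts from the same dataset; the "Dialogue: ... What was going on?" template and the example indices are placeholders, not necessarily the exact ones from the lab:

```python
# One-shot / few-shot prompt construction sketch.
def make_prompt(example_indices, target_index):
    prompt = ""
    for i in example_indices:
        example_dialogue = dataset["test"][i]["dialogue"]
        example_summary = dataset["test"][i]["summary"]
        # Each complete example includes the human-written answer.
        prompt += f"Dialogue:\n\n{example_dialogue}\n\nWhat was going on?\n{example_summary}\n\n\n"
    target_dialogue = dataset["test"][target_index]["dialogue"]
    prompt += f"Dialogue:\n\n{target_dialogue}\n\nWhat was going on?\n"
    return prompt

one_shot_prompt = make_prompt([40], 200)            # one complete example
few_shot_prompt = make_prompt([40, 80, 120], 200)   # a few examples
print(summarize(few_shot_prompt))
```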

 

 

  • Choosing max_new_tokens=10 makes the output text too short, so the dialogue summary gets cut off.
  • Setting do_sample=True and adjusting the temperature value gives you more flexibility in the output.
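
A small sketch of how these settings can be passed at generation time via the Hugging Face GenerationConfig API; the specific values are illustrative:

```python
# Pass generation settings via GenerationConfig.
from transformers import GenerationConfig

generation_config = GenerationConfig(
    max_new_tokens=50,   # 10 would cut the summary off
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # higher -> more varied output, lower -> more deterministic
)

inputs = tokenizer(one_shot_prompt, return_tensors="pt")
output_ids = model.generate(inputs["input_ids"], generation_config=generation_config)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```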
