5 Comments
Oct 28, 2023

Hello,

I have some questions, please.

- Do we have to use a specific prompt style to fine-tune a model with LoRA? I mean, would the "road" taken during inference be more likely to use the LoRA weights?

- With PEFT, we can add LoRA weights as extra weights, so why not do this several times with differently calibrated LoRAs to get better results? (See the sketch below.)
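
For reference, attaching several LoRA adapters to one base model with PEFT looks roughly like this (a minimal, untested sketch; the model name and adapter paths are placeholders):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Hypothetical base model and adapter paths, used only for illustration.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Attach a first fine-tuned LoRA adapter.
model = PeftModel.from_pretrained(base, "./lora-adapter-a", adapter_name="task_a")

# Attach a second adapter, e.g., trained on different data.
model.load_adapter("./lora-adapter-b", adapter_name="task_b")

# Only one adapter is active at a time unless they are explicitly combined.
model.set_adapter("task_b")
```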

Oct 28, 2023

Finally, mixing my two questions together should give the best model so far...

Working on it. What do you think?

author

That makes sense. I'm sure some work has already tried to apply LoRA several times and studied the impact. I'll try to find it.

My assumption is that the last LoRA fine-tuning will override the previous one if they are done one after the other. Maybe a strategy that simultaneously merges different LoRAs into the base model would work, but then I don't know what would happen during inference. That's an interesting question.
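
For example, with PEFT's LoRA utilities, merging two adapters into the base model could look like this (an untested sketch; adapter names, paths, and mixing weights are just placeholders):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "./lora-adapter-a", adapter_name="task_a")
model.load_adapter("./lora-adapter-b", adapter_name="task_b")

# Build a single adapter as a weighted linear combination of the two.
# "linear" assumes both adapters share the same LoRA rank.
model.add_weighted_adapter(
    adapters=["task_a", "task_b"],
    weights=[0.5, 0.5],
    adapter_name="combined",
    combination_type="linear",
)
model.set_adapter("combined")

# Fold the combined adapter into the base weights for plain inference.
merged_model = model.merge_and_unload()
```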

Oct 28, 2023

What if we change the prompt for every LoRA model?

Or simply the initial token...

author

Yes, it may work with a special tag inserted at the beginning of the prompt to route to the most relevant weights.
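
A rough sketch of that routing idea (the tags and adapter names here are hypothetical, and this assumes the adapters were loaded as in the earlier snippet):

```python
# Map a routing tag at the start of the prompt to a LoRA adapter name.
ROUTES = {"[TASK_A]": "task_a", "[TASK_B]": "task_b"}

def generate_with_routing(model, tokenizer, prompt: str, **gen_kwargs):
    for tag, adapter_name in ROUTES.items():
        if prompt.startswith(tag):
            model.set_adapter(adapter_name)      # activate the matching LoRA
            prompt = prompt[len(tag):].lstrip()  # strip the tag (keep it if the
            break                                # adapter was trained with it)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    return model.generate(**inputs, **gen_kwargs)
```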
