CoderSupreme@programming.dev to Asklemmy@lemmy.ml · edited 10 months ago
Sir_Kevin@lemmy.dbzer0.com · 1 year ago
The quality and amount of training done from model to model can vary substantially.
vrighter@discuss.tchncs.de · 1 year ago
Proving my point. The training set can be improved (until it's irreversibly tainted with LLM-generated data). The tech is not. Even with a huge dataset, LLMs will still have today's limitations.