CoderSupreme@programming.dev to Asklemmy@lemmy.ml · 1 year ago (edited)
Sir_Kevin@lemmy.dbzer0.com · 2 years ago
The quality and amount of training done from model to model can vary substantially.
vrighter@discuss.tchncs.de · 2 years ago
Proving my point. The training set can be improved (until it's irreversibly tainted with LLM-generated data). The tech is not. Even with a huge dataset, LLMs will still have today's limitations.