Can AI fix itself so that it gets better at a task? I don’t see how that would be possible; it would just fall into a feedback loop where its output gets stranger and stranger.
Personally, I will always lie to AI when asked for feedback.
It’s worse than that: people can’t even fix AI so it gets better at a task.
That’s been one of the things that has really stumped a team that wanted to go all in on an AI offering. They go through customer evaluations, and there’s just nothing they can do about the problems reported. They can try more training and hope for the best, but that likely won’t work and could also make other things worse.