Opinions about NLP, GPT-3 and BERT
#1
A few months ago I read an article that explained why it's pointless to make AI predict human emotions from photographs. The main reason was that machines can't understand the context of what they are evaluating.

I think the same applies to all of the big NLP models. The way we use the internet to communicate with people is always evolving and it's fully dependent on the context of the conversations, so, in my opinion, machines will never understand human language, and relying on GPT-3 or BERT is reckless.
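
To illustrate what I mean, here's a rough sketch (my own illustration, using the Hugging Face transformers library and bert-base-uncased, which aren't from any specific article): the model only conditions on the literal text it's handed, so anything outside that window, like the earlier conversation, sarcasm, or evolving slang, is simply invisible to it.

```python
# Rough illustration, not a benchmark: a masked language model only "sees"
# the text in its input window, not the wider conversation.
from transformers import pipeline  # assumes `pip install transformers`

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The model's guesses come from surface co-occurrence statistics alone.
for text in [
    "That meeting was a complete [MASK].",
    "lol that meeting was a complete [MASK].",  # sarcasm or tone changes the
]:                                              # intended meaning, but the model
    print(text)                                 # has no access to that context
    for pred in unmasker(text, top_k=3):
        print(f"  {pred['token_str']}  ({pred['score']:.2f})")
```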

But what do you think?



#2
Same thing, longer chains; it doesn’t solve the inherent problems with these models.

All they are doing is throwing more data at the problem and hoping that solves it, rather than actually developing a new model that solves the problem.



#3
I actually think it’s naive to dismiss a more performant and well-posed architecture for NLP tasks as “same thing, longer chain”, but I do agree that neither model really aims to better solve the problems it’s trying to solve, and that a model-centric approach (as opposed to a data-centric approach) is not the solution.