I tried it, but nothing worked. Maybe it's just a skill issue for me :L
Been using it on Groq, and for a 70B model it does things only 100B+ models could do months ago. It's actually really crazy. Sadly it only has 8k context, which makes anything beyond double-checking a few phrases pointless.
Though, I did a quick experiment. Using the same system prompt, I asked each model to translate:
「まさか勇者様、冗談なのです? あんな雑魚なんかこれからいくらでも戦うことになるのです。さっさと本気で消し去るのです」
Command R Plus:
“You can’t be serious, Your Highness the Hero. We’re going to be fighting that kind of weakling from now on. Get serious and wipe them out.”
GPT4 Turbo (don't know why it kept "Yuusha", which can mean "brave one" or "hero", untranslated?):
"Could it be, Yuusha-sama, that this is a joke? We'll be fighting plenty such small fries from now on. Just seriously wipe them out quickly."
And Llama 3:
"What, you're the brave hero, joking around? Those small fries will be a piece of cake to defeat from now on. Let's get serious and crush them already."
Now, Llama 3's translation is still stilted compared to the other two much bigger and stronger models, but look what happens when I ask each model to explain its translation.
GPT4 explanation:
View attachment 3563254
Command R Plus explanation:
View attachment 3563256
Notice how those two didn't really explain much? Command R is close, but still doesn't really explain the whole thing.
Now look at what Llama3 does:
View attachment 3563262
Despite each model getting the same system prompt and the same amount of context, Llama 3, even though its translation was much more stilted, gives not just explanations but a step-by-step guide to how it arrived at its translation. And again, this is a much smaller model, even compared to Command R Plus. It makes me hopeful for the 400B+ model Mark Zuckerberg mentioned Meta is still training (hopefully with more context), because at this rate, Mark Zuckerberg might actually be the unironic winner of the text generation AI race.
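If anyone wants to rerun this kind of comparison themselves, here's a rough sketch. It assumes Groq's OpenAI-compatible chat completions endpoint; the system prompt and model IDs are stand-ins, not the ones I actually used.

```python
import json
import urllib.request

# Groq exposes an OpenAI-compatible chat completions endpoint.
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

# Stand-in prompt -- substitute your own translation system prompt here.
SYSTEM_PROMPT = "You are a Japanese-to-English light novel translator."


def build_messages(text, follow_up=None, reply=None):
    """Assemble the chat history: system prompt + source text, and
    optionally the model's earlier translation plus a request to
    explain it (the second step of the experiment above)."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": text},
    ]
    if reply is not None and follow_up is not None:
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": follow_up})
    return messages


def ask(model, messages, api_key):
    """POST one chat-completion request and return the reply text."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    req = urllib.request.Request(
        GROQ_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Usage would be two calls per model: `ask(model, build_messages(source_line), key)` for the translation, then a second call with `follow_up="Explain how you arrived at this translation."` and the first reply passed as `reply`, so each model explains its own output in context.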