Needs stricter filtering and/or a manual quality check pass

#4
by koute - opened

This dataset's pretty big, which is nice! But unfortunately, as I look through it, its quality is very hit-and-miss.

First, there are a bunch of completely misaligned entries. For example, see "Great Detective of Another World", "I Came Back but the World is Still a Fantasy!?" or "Hazure Skill 'Mapping' wo Te ni Shita Ore wa, Saikyou Party to Tomo ni Dungeon ni Idomu". The first few lines from "Great Detective of Another World":

翌日、俺達が案内されたのは屋敷から馬車間程度の場所にある、石造りの古城だった。
This kind of quantity and power weapon appears in the hands of empty cloth members, which is quite different.

トレジャー家の領地の端にあるこの古城のことは有名だった。地下に潜るタイプじゃないダンジョンは珍しいからだ。
What's more, as the president of stark group, Tony didn't know anything about it.

そう、実際の話、それは遠目からは森の中にある古びた城にしか見えない。だが、馬車が近づくにつれて少しずつその城の奇妙な点が目につくようになった。
He thought of his early return to the base, but he was ambushed.

古びているのにその城はどこにも欠けず風化せずひびすら入っておらず、そして何よりも正面についている扉以外の出入り口がない。
Tony's heart blazed with fury.

Completely doesn't match.

Then, there are a bunch of low-quality entries (most likely machine translations) in the mix. One random gem here is from "Hachinan tte, Sore wa Nai Deshou! (WN)", which is both badly aligned and machine translated; here's a snippet (I found it by randomly grepping for "temee"):

魔族のテメエらの方が年上だろうが!               
Sweet sweet, you're not doing great with that level of learning!
                                                 
このまま氷のオブジェにしてやるぜ!               
Kusogaki!                                        
                                                 
やはり、魔法の訓練ゴッコしかしていないので長時間魔法の精度が保てないようだね。
Treat people as aunts!                           
                                                 
最初は互角だったけど、徐々に私の冷気に浸透されて足元から凍っていく。
The demonic Temee and others will be older!      
                                                   
「泣き言かい?甘いね、坊ちゃん!」
I will make it an ice object as it is!

「助けて」
After all, it seems that you can't maintain the accuracy of magic for a long time, because you are only doing magic training.

「残念!もう間に合わないね」
At first it was even, but it gradually penetrated my cold and started to freeze from my feet.

「綺麗なお姉さん!助けて!」
"Are you whining? Sweet, Bochan!"

「そんなお世辞が通用するか!旦那様以外の男性に綺麗だって褒められても嬉しくねえんだよ!助けてほしいか?」
"help"

So this is shifted by three lines; if we manually fix it we get this:

魔族のテメエらの方が年上だろうが!
The demonic Temee and others will be older!      
                                                 
このまま氷のオブジェにしてやるぜ!
I will make it an ice object as it is!                           
                                                 
やはり、魔法の訓練ゴッコしかしていないので長時間魔法の精度が保てないようだね。
After all, it seems that you can't maintain the accuracy of magic for a long time, because you are only doing magic training.
                                                 
最初は互角だったけど、徐々に私の冷気に浸透されて足元から凍っていく。
At first it was even, but it gradually penetrated my cold and started to freeze from my feet.
                                                   
「泣き言かい?甘いね、坊ちゃん!」
"Are you whining? Sweet, Bochan!"

「助けて」
"help"

...which now matches, but it's obvious that it's machine translated just by looking at the English translations.

So, all in all, this should be realigned from scratch, more aggressively filtered, and ideally manually checked (at least the first chapter of each series) to make sure the entries are not machine translations.

I've made a lot of fixes to the alignment code locally, but haven't run it yet, since it takes multiple days on my hardware, so I want to catch all the issues in one go.

Automated weeding out of poor translations is difficult; possible approaches range from detecting machine-generated text to general quality estimation using COMET. Each additional processing pass takes extra time though, so it's a tradeoff. Since I can't read Japanese, it's hard for me to do it manually beyond very simple checks that could be performed programmatically anyway.

I'll keep working on it. I'm planning to publish the code I use for alignment to GitHub, so hopefully other people can improve on it.

From experience I can tell you that COMET works well for essentially three things: 1) judging the general quality of datasets relative to each other, 2) finding misalignments, and 3) finding really badly translated sentences. Unfortunately it's not a panacea; it often produces false negatives (it can give you a low score for a sentence that's actually translated well, just because the sentence's style is not what COMET was trained on), and it cannot really detect machine translations very well (unless it's egregious).
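
For reference, scoring pairs with COMET only takes a few lines. Here's a rough sketch using the reference-free CometKiwi checkpoint (since there's no separate human reference here, only source and translation; the checkpoint may need its access terms accepted on Hugging Face first):

```python
# Rough sketch: reference-free COMET (CometKiwi) scoring of ja->en pairs.
# pip install unbabel-comet
from comet import download_model, load_from_checkpoint

# Reference-free quality-estimation checkpoint (gated; accept its terms on HF first).
model_path = download_model("Unbabel/wmt22-cometkiwi-da")
model = load_from_checkpoint(model_path)

pairs = [
    {"src": "魔族のテメエらの方が年上だろうが!",
     "mt": "The demonic Temee and others will be older!"},
]

# Scores are roughly in 0..1; very low scores tend to flag misalignments and
# completely broken lines rather than subtle (edited) MTL.
output = model.predict(pairs, batch_size=8, gpus=0)
print(output.scores, output.system_score)
```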

So the misalignments you can probably eliminate fairly easily in a fully automated manner, but I'm not really sure how you'd detect whether something's machine translated (especially if it's a subtle, edited MTL). But I guess for a lot of MTLs it's obvious that they're MTL'd just by looking at the English text? So even if you don't know any Japanese, you could probably filter some of these out just by, e.g., looking at the first few lines of the first chapter of each series and seeing if the English is fishy. Also, perhaps you could analyze how often a given translation group releases chapters; if it's too often, then obviously they must be using MTL to do it. Just some ideas.

I think part of the problem with COMET is that it penalizes rephrasing, even when the semantic similarity is maintained. It might make sense to use it in conjunction with a multilingual NLI model. Still experimenting.

I don't necessarily want to filter out all machine-generated texts, as some well-done ones can match the quality of a casual human translation. I'd mostly be targeting unmodified MTLs, which shouldn't be too hard to detect automatically.

Another approach I use is performing NER and coreference resolution on reference and hypothesis texts, to disambiguate them for sentence-level evaluation. I couldn't find good multilingual coreference resolution models, sadly.

It's possible this approach could be tweaked: use a good multilingual model, or even two monolingual ones, and compare the results for general similarity in coreference positioning or by other methods. Still not sure how well that would work...

I don't think I'll need to use COMET to directly detect misalignment; I could likely get by with a Google Translate + BLEU pass and some simple logic. Either that, or I can root around in my alignment code, which definitely has some bugs. Once I got it working, I was hesitant to prod at it and mess it up. I might have to take another look, or swap out LaBSE for something like VMSST, whose authors claim it can handle semantic similarity.
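
A rough sketch of what that pass could look like (an open ja->en model standing in for Google Translate, and chrF instead of BLEU since it behaves better on single sentences; the model name and thresholds are just placeholders):

```python
# Rough sketch of a misalignment pass: machine-translate each source line and
# compare it to the supposedly aligned target line; a long run of low scores
# suggests the chapter has drifted out of alignment.
# pip install transformers sentencepiece sacrebleu
from transformers import pipeline
from sacrebleu.metrics import CHRF

# Stand-in for Google Translate; any decent ja->en model works for this purpose.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ja-en")
chrf = CHRF()

def alignment_scores(src_lines, trg_lines):
    scores = []
    for src, trg in zip(src_lines, trg_lines):
        mt = translator(src, max_length=512)[0]["translation_text"]
        scores.append(chrf.sentence_score(trg, [mt]).score)
    return scores

def looks_misaligned(scores, threshold=20.0, bad_fraction=0.5):
    # Placeholder heuristic: flag a chapter if most of its lines score very low.
    bad = sum(s < threshold for s in scores)
    return bad / max(len(scores), 1) > bad_fraction
```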

Well, actually, I agree that well-done MTL-assisted translations are a lot better than even the majority of professional non-MTL translations! But the problem here is that they have to be done well - ideally using a state-of-the-art translation model - and they have to be edited by an actual translator who's looking at the original untranslated sentence and fixing the translation based on that (as opposed to someone who can only speak English and is only fixing the grammar on the English side). Unfortunately there aren't too many of those, especially among groups who are just translating a bunch of web novels for free. But yeah, at the very least the priority should be to get rid of the truly bottom-tier ones.

So you're currently using LaBSE similarities for the alignments? I've read some papers about it, but I've never experimented with it myself. Did you make a custom aligner using it? Based on the papers I've seen I think it's unlikely LaBSE would fail so badly on the misaligned texts you have in the dataset, so yeah, there's probably a bug or two hiding there.

Have you also considered an MTL-based alignment method like e.g. Bleualign? That'd also be easier to manually quality-check even if you can't read Japanese.

Owner

My aligner is loosely based on this sentence-transformers example: https://github.com/UKPLab/sentence-transformers/tree/master/examples/applications/parallel-sentence-mining

I considered using Bleualign, but was worried that semantically similar but rephrased sentences wouldn't be matched, which could theoretically stifle creative translations in the dataset. Not sure how valid that argument is, so I'll likely have to try it and see.
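
For context, the core of that mining approach boils down to something like this (a heavily simplified sketch of LaBSE scoring with margin normalization, not the actual aligner):

```python
# Simplified sketch of LaBSE-based pair scoring with margin normalization,
# in the spirit of the parallel-sentence-mining example linked above.
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")

def score_pairs(src_lines, trg_lines, k=4):
    src_emb = model.encode(src_lines, convert_to_tensor=True, normalize_embeddings=True)
    trg_emb = model.encode(trg_lines, convert_to_tensor=True, normalize_embeddings=True)
    sim = util.cos_sim(src_emb, trg_emb)  # (n_src, n_trg) cosine similarities

    # Margin scoring: divide each similarity by the mean of its row/column
    # nearest-neighbour similarities, so lines that are vaguely similar to
    # everything don't get matched spuriously.
    k = min(k, sim.size(0), sim.size(1))
    row_knn = sim.topk(k, dim=1).values.mean(dim=1, keepdim=True)
    col_knn = sim.topk(k, dim=0).values.mean(dim=0, keepdim=True)
    margin = sim / ((row_knn + col_knn) / 2)

    # Greedy best match per source line; a real aligner also needs to enforce
    # monotonic ordering and allow insertions/deletions on both sides.
    best = margin.argmax(dim=1)
    return [(i, int(j), float(margin[i, j])) for i, j in enumerate(best)]
```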

Owner

When you were looking through the dataset, how common were the failures? Did you see lots of small persistent errors, or more along the lines of a bad alignment sprinkled among good ones?
I'll take a look at the metadata for the examples you provided to get an idea of what's going wrong.

Well, it's kind of all over the place, but then I haven't exhaustively looked at everything of course. Let me give you some more examples.

Note: I'm mostly eyeballing this, and all numbers are counted from 0 as they appear in the file for a given series, so "000" means the first document for a given series.

Bottom tier

These are completely broken and are mostly misaligned (although here and there some lines might match).

  • The Wolf Lord's Lady, #005
  • The Tanaka Family Reincarnates, #000
  • The Pseudo-Kunoichi from Another World, #026
  • My Twin Sister Was Taken as a Miko and I Was Thrown Away but I'm Probably the Miko, #179
  • Entering a Company From Another World!?, #054
  • Clearing an Isekai with the Zero-Believers Goddess – The Weakest Mage among the Classmates (WN), #224
  • Yuri Empire, #036

Some misalignments

These have misalignments sprinkled in, but are not completely derailed.

  • Nigoru Hitomi de Nani wo Negau, #107
  • Level Eater, #045
  • The Magical Revolution of the Reincarnated Princess and the Genius Young Lady, #002

It seems there might be quite a lot of these, actually. Especially for lower-quality translations and/or heavily edited ones.

Have extra stuff

  • Metro Labyrinth, #094

Example:

「......なにこれ......」
"...what is this...?" (Shuu)

Bad MTL-tier

Decently aligned, but the translations themselves are sloppy machine translations which I'm 100% sure weren't properly quality-checked. (On the very first page I can already see egregious mistakes which should be obvious to anyone who can read Japanese.) These should just be completely deleted from the dataset, in my opinion.

  • The Pseudo-Kunoichi from Another World, #003
  • Skeleton Knight, in Another World, #152
  • My Status as an Assassin Obviously Exceeds the Hero's, #166

As an example here's one of the MTL mistakes from the first one:

ん、トップバッターはヴァト君ね。
Mm, the top batter is you, Vato.

The proper translation for this should be "Mm, the top batter is Vato-kun." The "君" can be read both as "kimi" (you) and "kun" (the "-kun" name suffix). This is a mistake that MTL systems make all the time, but no human translator would. (It's also why you don't want this kind of machine translation in your dataset; it just ends up poisoning any model you train on it with these kinds of mistakes.)

Another example from the second one:

しかしそんな様子を見守っていた小さな女の子の方は、返事をしない女性の容態を心配して縋り付くように名前を呼んで涙を流していた。
But if you were a little girl watching for that, you were crying in tears calling your name to annoy you worried about the condition of a woman who wouldn't respond.

This one's a little tricky to translate directly (so I'm not surprised MTL is struggling here), but a proper translation would go something like this: "However, the little girl who was watching over the whole situation was calling out the name of the unresponsive woman, as if clinging to her, crying."

So as you can see, the MTL is essentially garbage.

Better MTLs

These are also 100% edited MTLs, but of higher quality. That doesn't necessarily mean they're perfect, though; they still contain mistakes.

  • Reincarnation into the Barrier Master, #212

Example:

「でも......しばらくは、こうして、一緒に寝てくださいませ......。さもないと、眠れない気がするのですわ」
"But......please don't sleep with me like this for a while........Otherwise, I don't think I'll be able to sleep."

Proper translation: "But.... just for a little bit, like this, please sleep with me.... Otherwise, I don't think I'll be able to sleep."

Another example from the same file:

「いいや、メイ。<<snip for clarity, since the line is long>>
"No, May. No, Mei... <<snip for clarity>>

A leftover from the MTL editing. As you can see, the MTL translated "いいや、メイ" into "No, May", which they then edited into the correct "No, Mei", but they accidentally forgot to delete what the MTL originally spewed out.

Good (?) TLs

Possibly MTLs, but they're pretty decent so it's hard to tell (and I haven't read them through top to bottom, just took a quick look). I've still seen some mistakes, but the quality seems better than your average commercial TL. (And by "quality" I mean that the meaning and structure of the original sentences are carried over very well while still being good English.) These are the kind of TLs you want in the dataset.

  • Magical Girl Tyrant Sylph, #008
  • The Idol Girl in My Class Is Acting Suspiciously, #055

Thanks so much for all the details!

I didn't do any quality filtering on the original dataset, instead relying on the top novels from novelupdates being good quality anyway.
I'll take another crack at reprocessing this dataset in the next week or two after I finish testing a CPO tune of the model.

I was considering the viability of fine-tuning bge-m3 or similar to score full-document translation quality, either prior to alignment or maybe after it. I'm not super familiar with those types of models, so I'll see how things go. It'll likely end up worse than COMET, so it's towards the bottom of the TODO list.
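
Even before any fine-tuning, the crude zero-shot version of that idea is just document-level embedding similarity, something like this (a sketch assuming the FlagEmbedding package):

```python
# Crude zero-shot version: embed the full source and target documents with
# bge-m3 and use their cosine similarity as a rough document-level score.
# A fine-tuned scoring head would eventually replace this.
# pip install FlagEmbedding
import numpy as np
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3")

def doc_similarity(src_doc: str, trg_doc: str) -> float:
    vecs = model.encode([src_doc, trg_doc], max_length=8192)["dense_vecs"]
    a, b = vecs[0], vecs[1]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```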

Hmm, looking at the badly aligned examples' metadata (when I do the update, I'll include the chapter numbers too, seems like a no-brainer):
The Wolf Lord's Lady 5: {"missed_lines": 0, "inserted_lines_src": 11, "inserted_lines_trg": 110}
The Tanaka Family Reincarnates 0: {"missed_lines": 4, "inserted_lines_src": 76, "inserted_lines_trg": 6}
The Pseudo-Kunoichi from Another World 26: {"missed_lines": 1, "inserted_lines_src": 2, "inserted_lines_trg": 1}
My Twin Sister Was Taken as a Miko and I Was Thrown Away but I'm Probably the Miko 179: {"missed_lines": 1, "inserted_lines_src": 0, "inserted_lines_trg": 1}
Entering a Company From Another World!? 54: {"missed_lines": 7, "inserted_lines_src": 4, "inserted_lines_trg": 3}
Clearing an Isekai with the Zero-Believers Goddess – The Weakest Mage among the Classmates 224: {"missed_lines": 14, "inserted_lines_src": 30, "inserted_lines_trg": 10}
Yuri Empire 36: {"missed_lines": 0, "inserted_lines_src": 13, "inserted_lines_trg": 1}

Not as much of a pattern as I'd have liked.
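
The obvious stopgap is to threshold on those counts relative to the chapter length, something like this (a rough sketch; the fields are the ones shown above, while the line count and cutoff are placeholders):

```python
# Rough heuristic over the alignment metadata shown above. The cutoff and the
# total line count are placeholders; the counts alone clearly aren't enough.
def looks_suspicious(meta, total_lines, max_ratio=0.15):
    churn = (meta["missed_lines"]
             + meta["inserted_lines_src"]
             + meta["inserted_lines_trg"])
    return churn / max(total_lines, 1) > max_ratio

meta = {"missed_lines": 0, "inserted_lines_src": 11, "inserted_lines_trg": 110}
print(looks_suspicious(meta, total_lines=300))  # hypothetical chapter length
```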

Owner

@koute If you want to take a stab at the sentence alignment yourself, I can upload the raw aligned documents as a dataset.

Rewrote my alignment code; it's got much better performance (tested it on a few of your examples, and it aced them with both LaBSE and VMSST as the embedding model). Currently creating a new pre-training dataset to act as a base for both the translation model and the eval metric, which is required to filter poor translations out of the parallel dataset.

Owner

@koute I updated the dataset with new alignment but no quality filtering. It should hopefully fix the worst of the issues you found.

Would be curious to hear how you're handling the translation eval; that's something I'm currently struggling with.
(The current method I'm using is to ask a 70B LLM to judge the sentence, and it "works", but it's not amazing and I feel like there are better ways.)

My current method is reference-based. I first use a named entity recognition model to get the names from the text, then use chrf to replace similar names in the machine translation with the ones from the reference. Then I perform coreference resolution on the texts, and replace all context-based mentions of things with the primary name (so all pronouns get replaced with the character name, and so on).
Finally, I take the resulting texts and use an NLI model to compare them sentence by sentence, which checks whether the two texts are in agreement, or carry the same semantic meaning. Then I average the sentence scores to get the final one.

This method is the only one I've found that can reliably detect poor translations. It works on the idea that bad translations change the meaning of the text, and good ones do not. Since all the NLI models I found only work on sentences, I have to disambiguate the texts first so that there's no question as to what is being referenced in a sentence. Skipping the coreference stage ruins the eval accuracy.
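
The final NLI-and-average stage is the simple part; roughly it looks like this (a sketch of just that stage, with the NER/chrF/coreference disambiguation stubbed out, and with one public multilingual NLI checkpoint picked as an example):

```python
# Sketch of the final stage only: compare the disambiguated reference and MT
# texts sentence by sentence with an NLI model, then average the entailment
# probabilities. The NER + chrF name substitution + coreference resolution
# described above is stubbed out here.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# One public multilingual NLI checkpoint, picked as an example.
NLI_MODEL = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(NLI_MODEL)
model = AutoModelForSequenceClassification.from_pretrained(NLI_MODEL)

def disambiguate(text):
    # Placeholder for the disambiguation pipeline; here we just split naively.
    return [s.strip() for s in text.split(".") if s.strip()]

def entailment_prob(premise, hypothesis):
    inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    # Assumes the usual entailment/neutral/contradiction label set.
    return probs[model.config.label2id.get("entailment", 0)].item()

def translation_score(reference, hypothesis):
    ref_sents = disambiguate(reference)
    hyp_sents = disambiguate(hypothesis)
    scores = [entailment_prob(r, h) for r, h in zip(ref_sents, hyp_sents)]
    return sum(scores) / max(len(scores), 1)
```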

The only problem with this method is that it requires a good reference. So, I'm trying to train a new model on the eval scores of various machine translators with the source being provided instead of the reference, so it can learn to bypass the need for a reference.

That's a very interesting way to handle it!
Have you uploaded this code to a github anywhere, would love to see it.

Owner

The code is a bit of a mess; I'll clean it up and upload it later today, and will send the link then.

Great, I'll look forward to that! If it's a little messy that's okay, research projects tend to be kind of a mess in general.

I've had success making a reference-free metric. I've made one based on stabilityai/japanese-stablelm-3b-4e1t-base that takes 512 tokens of a source and target text and rates the translation quality as 'Good', 'Decent' or 'Bad'. So far it seems to be able to consistently pick up on poor translations, especially ones with incorrect pronoun assignment (which is pervasive in Japanese-English MT). The real issue is that it needs more training data containing high-quality references for the final version, which is going to take me a while to filter out of the dataset.

@NilanE
Thank you for sharing the idea of using an LLM as a reference with the source/target text.
This was very helpful. (I'm running behind on my own work so don't have time to say more, but since you were kind enough to share I wanted to thank you!)
