- Jul 24, 2017
> I do that in the original file or in the translated one?
The original.
Not sure I understand. The estimate is based on dollars and is calculated by going through the files and counting all the tokens using OpenAI's tokenizer. I have to guess what the output is going to be, so it's not 100% accurate, but generally the estimate is around 20% higher than the actual cost.
The 0.002 is the price in dollars per 1K tokens. It might be out of date; I don't use 3.5 anymore.
What, exactly, does 0.002 represent? I find that when I do translations using the latest model, gpt-3.5-turbo-0125, my costs are enormously lower than the estimate: around 10% of the given estimate. I notice that you use a different model in your default ENV config, which may cost more than the latest one. The newer models are better and cheaper. Also, the OpenAI pricing page now lists prices in dollars per million tokens, so I'd like to update the pricing estimate to be closer to the actual value I'm paying.
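Since the pricing page quotes dollars per million tokens, converting a token count into a dollar estimate is a couple of lines. A minimal sketch; the prices below are the gpt-3.5-turbo-0125 rates at the time of writing and should be checked against OpenAI's pricing page before use:

```python
# Rough cost estimate from token counts, using per-million-token prices.
# Prices are illustrative (gpt-3.5-turbo-0125 at time of writing);
# verify against OpenAI's pricing page before relying on them.
PRICE_PER_M_INPUT = 0.50   # USD per 1M input tokens
PRICE_PER_M_OUTPUT = 1.50  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one translation request."""
    return (input_tokens / 1_000_000) * PRICE_PER_M_INPUT \
         + (output_tokens / 1_000_000) * PRICE_PER_M_OUTPUT

# e.g. a 2,000-token prompt expected to produce a similar-sized reply:
print(round(estimate_cost(2000, 2000), 4))
```

Token counts would come from OpenAI's tokenizer (the tiktoken package) in a real estimator; the function above only does the price conversion.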
[Attachment 3608112: OpenAI API usage graph]
Those two spikes represent two full passes of the game I was working on translating, plus additional passes. The cost estimate was $11 per pass, but as you can see, the actual cost was enormously lower.
Are you actually just using the old, expensive model? You could be spending 90% less money.
> The 0.002 is the price in dollars per 1K tokens. It might be out of date; I don't use 3.5 anymore.
Consider trying 3.5-turbo-0125. It's pretty good.
I'm using GPT-4-Turbo, which is $0.01 (input) and $0.03 (output) per 1K tokens. Since the estimate can't know which requests will fail and need to be retried, it multiplies the cost of each request by 2. That's why the estimate is always higher than the actual cost.
The estimate is essentially a worst-case cost analysis. If you are going to invest in translating a game, it tells you the maximum it could potentially cost. The actual cost is usually lower because, obviously, not every request is going to fail. The whole purpose is to make sure you know the most you are likely to spend.
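The doubling described above can be sketched as follows. This is a hypothetical illustration of the behavior the poster describes, not the tool's actual code:

```python
# Worst-case cost estimate: budget every request as if it fails once and
# is retried, i.e. multiply each request's raw cost by 2. Hypothetical
# sketch of the described behavior, not the tool's real implementation.
FAILURE_MULTIPLIER = 2  # each request budgeted as if sent twice

def worst_case_estimate(per_request_costs: list[float]) -> float:
    """Return an upper-bound spend: each request's cost doubled."""
    return sum(cost * FAILURE_MULTIPLIER for cost in per_request_costs)

# Three requests costing $0.01, $0.02, $0.03 -> a budget of about $0.12
print(worst_case_estimate([0.01, 0.02, 0.03]))
```

This matches the pattern in the usage graph above: an $11-per-pass estimate with actual spend well under half of that, because most requests succeed on the first try.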
> Consider trying 3.5-turbo-0125. It's pretty good.
Not good enough for my purposes.
GAMEUPDATE.bat for any Linux users. Works for me on Ubuntu. Save it as GAMEUPDATE.sh, give it execution rights with sudo chmod +x GAMEUPDATE.sh, and run it with ./GAMEUPDATE.sh.

#!/bin/bash
# Check if patch-config.txt exists
if [ ! -f ./patch-config.txt ]; then
    echo "Config file (patch-config.txt) not found! Assuming no patching needed."
    exit 0
fi

# Read configuration (username, repo, branch) from file,
# stripping any Windows-style carriage returns
source ./patch-config.txt
USERNAME=$(echo "$username" | tr -d '\r')
REPO=$(echo "$repo" | tr -d '\r')
BRANCH=$(echo "$branch" | tr -d '\r')

# Fingerprint the latest commit by hashing the branch metadata returned
# by the GitHub API (a cheap change-detection proxy, not the literal
# commit SHA)
echo "Getting latest commit SHA hash."
LATEST_PATCH_SHA=$(curl -s "https://api.github.com/repos/${USERNAME}/${REPO}/branches/${BRANCH}" | sha256sum | tr -d "[:space:]-")

# Compare with the previously stored hash
if [ -f previous_patch_sha.txt ]; then
    PREVIOUS_PATCH_SHA=$(head -n 1 previous_patch_sha.txt | tr -d '\r')
    if [ "$LATEST_PATCH_SHA" = "$PREVIOUS_PATCH_SHA" ]; then
        echo "Patch is up to date."
        exit 0
    else
        echo "Update found! Patching..."
    fi
else
    echo "Previous SHA hash not found!"
    echo "Assuming first time patching..."
fi

# Download a zip of the branch
echo "Downloading latest patch..."
curl -s "https://codeload.github.com/${USERNAME}/${REPO}/zip/refs/heads/${BRANCH}" -o repo.zip

# Extract contents
echo "Extracting..."
rm -fr "${REPO}-${BRANCH}"
unzip -qq repo.zip

# Apply patch by copying the extracted files over the current directory
echo "Applying patch..."
cp -r "${REPO}-${BRANCH}"/* ./

# Clean up
echo "Cleaning up..."
rm -f repo.zip
rm -fr "${REPO}-${BRANCH}"

# Store the latest hash for the next check
echo -n "$LATEST_PATCH_SHA" > previous_patch_sha.txt
> Is it possible to use a huggingface model instead of openai's chatgpt? Since those can be free, locally hosted, and unrestricted.
It's definitely possible. Such solutions already exist; for example, there's a module called Sakura for BallonsTranslator. I haven't used it myself, but if I understand correctly, the module allows the local model Sakura-13B-Galgame to be used for translating from Japanese to Chinese.
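One common route, for what it's worth: many local model runners (llama.cpp's server, Ollama, LM Studio, text-generation-webui) expose an OpenAI-compatible /v1/chat/completions endpoint, so a tool written against the OpenAI API can often be repointed at a locally hosted Hugging Face model just by changing the base URL. A minimal sketch of building such a request; the endpoint URL and model name below are placeholder assumptions, not anything from this tool:

```python
import json

# Sketch: build an OpenAI-style chat-completion request body that a
# locally hosted, OpenAI-compatible server could accept. The base URL
# and default model name are hypothetical; match them to your server.
LOCAL_BASE_URL = "http://localhost:8000/v1"  # hypothetical local server

def build_translation_request(text: str, model: str = "local-model") -> str:
    """Return the JSON body for an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Translate the following Japanese text to English."},
            {"role": "user", "content": text},
        ],
        "temperature": 0.3,
    }
    return json.dumps(payload)

body = build_translation_request("こんにちは")
print(json.loads(body)["messages"][1]["content"])
```

You would POST that body to LOCAL_BASE_URL + "/chat/completions"; whether translation quality holds up depends entirely on the local model.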
> Anyone here tried translating Waffle games using this? It's missed about 10% of the text I'm trying to work on, which is quite a lot in this case, probably a little over 1,000 missed lines if we convert that percentage to lines. You must be registered to see the links. Also yes, if you click on the link, it has a translator. But he has been radio silent for two years on everything, so I'm just assuming his translation for this is dead. On top of that, lots of people struggle to hook this with Textractor: even with the right hook code it doesn't work; you need to change a few file names for it to hook right.
Don't think I've used it on Waffle before. Is this using the csv module?
> Don't think I've used it on Waffle before. Is this using the csv module?
No, it's using the anim module. I'm new to this, so I'm just guessing that the .csv module wouldn't work, since the unpacking tool I use unpacks the scr.pak into a JSON file.
> No, it's using the anim module. I'm new to this, so I'm just guessing that the .csv module wouldn't work, since the unpacking tool I use unpacks the scr.pak into a JSON file.
Maybe there's something special about the lines that it's missing? That module is built for anim, after all, so it might not be a perfect fit.
Found tool in this thread: https://f95zone.to/threads/translation-request-maki-chan-to-now-waffle.99167/
Link to the actual tool: You must be registered to see the links
I also tried loading it with the other .json modules, but none of them would even load except for anim, so I kinda just went with it.
> Maybe there's something special about the lines that it's missing? That module is built for anim, after all, so it might not be a perfect fit.
Ah shit, right. All right, I'll go take a look at an anim game then and compare. For now, though, I've tried tweaking my prompt and doing a second pass. I'll keep you posted.
> Alright, so I looked through the json TL in https://f95zone.to/threads/wifes-pussy-transformed-while-im-away-final-anim-teammm.177100/ to compare anim and Waffle, and the line structure seems to be the same, namely: "japanese text": "english text",
> Though I've done four passes so far, and something is setting off my alarm just a little bit. If I search for empty ("") texts, i.e. lines where the translation didn't happen, they decrease at a very linear rate, like this:
> First pass empty translations: 1865
> Second pass empty translations: 1580
> Third pass empty translations: 1291
> Fourth pass empty translations: 1060
> First to second pass translations completed: 285
> Second to third pass translations completed: 289
> Third to fourth pass translations completed: 231
> There's a dip in the third-to-fourth delta, but the first two were right next to each other, so there's probably some setting or prompt adjustment to be made, since the issue is fairly consistent. If I ever decide to do another Waffle game, maybe I'll look into it. But for now I'll just brute force it, I guess; since I'm only using the gpt-3.5 API, the cost is low, and if I want to publish it I'll take the month or two to edit it.

What might be happening is, since you are using 3.5, the AI isn't smart enough and there are more mismatches happening.
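Counting the empty ("") translations described above is easy to script against the unpacked JSON. A minimal sketch, assuming the file is a flat {"japanese text": "english text"} mapping as quoted in the thread; the filename is a placeholder:

```python
# Count untranslated entries in a flat {"japanese": "english"} JSON
# mapping, as the unpacking tool described above is said to produce.
# The flat-dict layout is an assumption based on the quoted line
# structure; adjust if the file nests differently.
def count_empty_translations(mapping: dict[str, str]) -> int:
    """Return how many source lines still have an empty translation."""
    return sum(1 for value in mapping.values() if value.strip() == "")

# In practice you would load the file first, e.g.:
#   with open("scr.json", encoding="utf-8") as f:
#       mapping = json.load(f)
sample = {"こんにちは": "Hello", "さようなら": "", "ありがとう": "  "}
print(count_empty_translations(sample))  # 2 (whitespace-only counts too)
```

Running it after each pass gives the per-pass numbers above without manually searching for "" in an editor.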