Optimising mods for size

User avatar
Blockhead
Member
Posts: 1602
Joined: Wed Jul 17, 2019 10:14
GitHub: Montandalar
IRC: Blockhead256
In-game: Blockhead Blockhead256
Location: Land Down Under
Contact:


by Blockhead » Post

This post was originally a response to this post. I think it was too much of a wall of text for that thread. Instead I think it deserved its own thread, to encourage more discussion.

I must say I am not an absolute master at this, but I think I have learned a fair few things that are worth sharing. Please chime in if you have something to add. I must also thank everyone who has responded so far, I have been using your input to revise and improve this Original Post (OP).

What is size?
Storage medium (plural: media) := A device for permanently storing bytes. Examples include hard disk drives and solid state drives, but also older media like optical media, floppy disks and so on.

Filesystem := The extra bits and bytes your computer writes onto your storage medium to keep track of where all the files are and what their names are. This obviously takes some space on its own, hence the standard disclaimer on storage media of 'available size may be less due to formatting'.

Apparent size := How many bytes actually make up the content of a file. This is usually the main figure a file browser application reports.

Block size := The minimum allocation size for parts of a file in a filesystem. Most desktop media with a typical filesystem allocate storage in blocks of 4096 bytes (4K). If a file is any smaller than 4K, it still takes up 4K of storage.

Disk usage := How much space all the files take up in your storage (solid state, spinning disk), which is always a multiple of block size.
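On a Unix-like system you can see the gap between apparent size and disk usage with coreutils alone; a quick sketch (GNU stat/du options and a 4K-block filesystem assumed, so your numbers may differ):

```shell
# A 5-byte file still occupies a whole filesystem block.
printf 'hello' > tiny.txt
stat -c '%s' tiny.txt   # apparent size: 5 bytes
du -B1 tiny.txt         # disk usage: typically 4096 bytes
# For a whole directory tree:
du -sb .                # total apparent size (GNU du)
du -sh .                # total disk usage, human-readable
```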

Archive file := A file that packs multiple files together, often with metadata such as file permissions, and is often compressed. Even when uncompressed, archive files can often pack many small files together and result in lower disk usage than when all the files are sitting directly on the file system.

Git repository size := How much disk usage a git repository of the mod occupies (other version control systems are available).

If you are using version control like git, it has to store the information to reconstruct every version of your mod. Particularly bad are non-text files like models, textures and sounds, because they are never recorded as deltas (just the differences from the previous version) but instead as an entirely new copy of the file. Your Lua files only need the differences between versions recorded, which makes new versions take much less space. The same goes for your README, LICENSE, settingtypes.txt, mod.conf and other text files.

Git host := Sites that hold git repositories online, enable browsing via a web browser, and may include other features for collaboration, continuous integration et cetera. Examples include GitHub, GitLab, NotABug, CodeBerg, sourcehut. Self-hosted options include GitLab self-hosting, Gitea, Gogs, sourcehut self-hosting and cgit.

Downloaded mod size := The disk usage for a copy of a mod downloaded without any version control files, such as you would get from ContentDB or a git host.

Media size := The size of the files that a client must download when connecting to a server. It is the sum of the disk usage of all the sent media.

Transfer size := The size of the representation of a media file as it is received by the client. This is usually the apparent size of the file, unless it is sent compressed over the network. There is also a small size overhead for UDP/IP.

Minetest sends media from the following directories of a mod: textures, sounds, media, models, locale. We will discuss most of these separately. Media size is smaller than downloaded mod size, though usually not by much unless the mod is almost entirely code with little to no assets.

Lossless compression := A file compression technique that does not result in any loss of information, detail, resolution, and so on. A losslessly compressed file can rarely be as small as a lossy compressed one.

Lossy compression := A file compression technique that can reduce the size of a file by sacrificing the ability to perfectly reconstruct it later on. The stored output is only an approximation, and in many cases when a file is lossy compressed more than once, it loses quality every time.

Please, if you are authoring media files, always keep uncompressed or losslessly compressed files as the originals, and re-export them to lossy compressed formats only as the last step, once you are done revising the file.

Measuring Media size
It's a lot trickier to measure the actual media download size than the size of the mod on your hard drive.

But all of the old versions in git are irrelevant to the actual download size, as are the Lua files, your mod.conf, depends.txt and anything else that isn't in the textures/, models/, media/, sounds/ and locale/ directories of your mod. So you will probably get a good approximation by selecting just those directories and counting the total file size from those (and try to avoid rounding while you do that, especially if it's a modpack).
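As a sketch, on a Unix-like system you could total the apparent size of just those directories from the mod's root with find and GNU du (directories that don't exist are simply skipped):

```shell
# Sum the apparent size (in bytes) of every file Minetest would
# send as media; run from the root of the mod.
find textures sounds models media locale -type f \
    -exec du -cb {} + 2>/dev/null | tail -n 1
# Swap 'du -cb' for 'du -c' to count block-rounded disk usage
# instead of apparent size.
```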

Another method is to measure it empirically, using a portable Minetest installation as a client (the default on Windows, for instance):
  1. Run a server with Minetest Game (or some other game) and no mods, connect a client that has the same game installed, and measure the size of the client's minetest/media/cache directory. This is your baseline size for connecting to the server.
  2. Add to the server any dependencies that your mod has and measure the size of the cache directory again, to establish a baseline that excludes your mod. If your mod works without dependencies, skip this step.
  3. Install your own mod into the server and measure the cache directory one more time.
Subtracting the first and second baseline numbers tells you how much data (1) the game without any mods takes, (2) your mod's dependencies take and (3) just your mod takes. Don't install any mods to the client, don't launch it in singleplayer and don't open the Content tab, as those may add files to the cache that you don't want.

Once you have a good idea of what the actual downloaded size of your mod is, you can figure out if you want to try to decrease its size. You will do this by running different kinds of disk usage optimisation on your media files.

Disk Usage Optimisation

Textures

What's there to gain or lose?
  • You will benefit most from optimising your textures if they have few colours or flat areas of colour, or, for photorealistic textures, if you save them as JPEG.
  • You can lose too much detail from textures if you try too hard to compress them. You have to decide what your acceptable level of detail is.
  • Textures will usually be easier to optimise than models.
The main texture format in Minetest is PNG. There is also support for JPEG, BMP and TGA. That last one got particularly controversial at one point.

My advice? Don't use BMP at all, and don't use TGA unless it's for a tiny, tiny file - even though a lot of files are going to be smaller than 4KiB. While BMP and TGA do support basic RLE compression when created by some programs, PNG's compression is going to be better in most cases. There's also no guarantee Minetest will handle the combination of palettes and RLE scheme used correctly.

TGA has had a complicated history with Minetest; however, in my opinion there are not many compelling reasons to use it. mcl_maps, part of Mineclone2/5, was using TGA for its in-game maps because TGA is an easier format to encode from Lua than PNG. The Minetest devs had almost forgotten/never knew anyone was using TGA, and ripped it out to reduce how much code they had to manage. Well, the easiest way to find out if somebody uses some feature is to remove it and see if any complaints come in. Long story short, a PNG encoder was written in C++ and added to Minetest so that the excuse of "It's easier to encode" for TGA no longer rang true, but TGA support was also added back. For more information on this complicated history, where nobody is obviously right or wrong, see erlehmann's post in this thread.

The small set of use cases for TGA is dynamically generated media where the pixel dimensions are quite small, less than 32 pixels in almost all cases (there are always complicated exceptions). xmaps uses TGA effectively because it has a very small palette and pixel dimensions. TGA is also encoded directly into the item metadata of those maps, so every byte counts, and it works out smaller than PNG, especially than the output of minetest.encode_png. Remember: reducing apparent size will reduce transfer size, but it will not reduce disk usage. This is why the smaller apparent size of TGA is usually moot, and the standard choice of PNG is almost always best. That is why I can only recommend TGA for dynamic media of low resolution; if that's not your use case, disregard the format.

For PNG textures, optipng, oxipng and similar programs are commonly used to reduce their size. pngcrush is an older program than those first two and is obsolete; don't use it. For maximum compression with optipng, you should also strip the metadata. An example invocation of optipng is optipng -O7 -strip all $FILENAME, which optimises as hard as possible but takes longer to run, and will remove all metadata. Removing metadata is at your discretion: maybe you want to keep the authorship information in the file. You may find the odd file with a large amount of metadata though.
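As a sketch, here is how you might run that over every PNG in a mod (optipng must be installed; this rewrites files in place, so commit or back up first):

```shell
# Recompress all PNGs under textures/ as hard as possible and
# strip all metadata (this drops authorship info too -- your call).
find textures -name '*.png' -exec optipng -O7 -strip all {} \;
```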

For something more extreme than PNG optimiser programs, you can try opening the file in GIMP, and converting to indexed colour with a small palette size. A palette size of 8-32 colours is what I consider normal, depending on texture complexity; of course if your texture only has 2 colours use a 2-colour palette. Be aware that dithering will reduce the gains you can make from converting to indexed colour, because it will be harder to compress a file with dithering; you should probably disable dithering. Finally, export with minimal metadata (untick most/all of the export option boxes), or run the output through optipng to remove all metadata.

You can also put any images that share an indexed colour palette into the same file and use Minetest's tilesheet functionality. This will save disk usage if the apparent size of any file would be less than 4K. If you are authoring a game or extensive mod and care about file size a lot, strongly consider designing your textures around a single colour palette of your own or someone else's creation, and placing all of your node textures into a single tilesheet file. You will probably want to make an exception for mesh nodes and entities, and not include those in your tilesheet: meshes require UV mapping, which would entangle all your models together and cause a massive headache with the amount of complexity it adds. As an example of the power of tilesheets, Zughy's Soothing 32 as a texture pack for Minetest Game could save a lot more disk space if all the Minetest Game textures were on one tilesheet (proof of concept pending).

JPEG is suitable if you are using high resolution textures and want to save space - you can tune the JPEG quality according to how you want your space/quality tradeoff to go. If you want to go beyond simply reducing the quality, you can enable chroma subsampling. This reduces the resolution of the colour information without reducing the resolution of the brightness information. In GIMP, when exporting a JPEG, under advanced options you can select subsampling 4:2:0 (chroma quartered). As always, test the result to make sure it's acceptable, but the file size savings can be quite drastic.

Sounds
What's there to gain or lose?
  • As you decrease sound quality, you may not enjoy the music enough.
  • A lot of sound effects don't need a lot of detail, so you may be able to downsample them without a noticeable loss of quality.
Minetest only supports the Ogg Vorbis (.ogg) format for sounds. These may end up being a lot larger than your master files if you are authoring with something that uses MIDI, or an old-school tracker program. If your source files are low-bitrate WAVs, you may also struggle to represent them with adequate fidelity as Ogg Vorbis without the file size blowing up.

The space/quality tradeoff is similar to JPEG's, because Vorbis is also a lossy compression algorithm. Since we are dealing with audio, though, your options are not quite as simple as JPEG's. You can reduce the target bitrate (or quality setting), which reduces overall fidelity by running the lossy compression harder. You can also reduce the bit depth, which is the number of bits in a single sample, comparable to bits per pixel or bits per channel in image formats. Finally, the setting with no image-format counterpart is the sample rate, given usually in kHz, which is how many samples are played each second. You can read more about these concepts online.

As a start, you might try re-encoding with lower quality from a .ogg exporter like Audacity. If you need more control ffmpeg will help, but be prepared for a steep learning curve with ffmpeg.

My advice, though I am no expert in audio, is typically to reduce bitrate/quality if you want to reduce the size - this is usually the primary way to reduce size, and it's only one variable to tune your quality/space tradeoff on. Reduce bit depth and sample rate if you want to go further into tweaking. Reduced bit depth may work for a chiptune/retro sound, since older games often used small bit depths and 'bit crushing' is a technique often used for that retro sound. Sample rate can go down to 24 kHz acceptably for human hearing, though it is not as ideal as 44.1/48 kHz. Lower than 24 kHz will start to sound like phone hold music, which is never nice.
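As a concrete sketch with ffmpeg (file names are illustrative; for the libvorbis encoder, -q:a runs from roughly -1 to 10 and -ar sets the sample rate):

```shell
# Re-encode an Ogg Vorbis file at quality 3 and downsample it to
# 24 kHz, as discussed above. Compare sizes and listen afterwards.
ffmpeg -i footsteps.ogg -c:a libvorbis -q:a 3 -ar 24000 footsteps_small.ogg
```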

Music will take up some space. Compared to music, sound effects usually take up minimal space but again you can still reduce the file size by re-encoding.

3D Models
What's there to gain or lose?
  • Choosing the right model format can net you a fair amount of savings.
  • If you lossy compress your OBJs too much, you will get nasty z-fighting.
For 3D Models, you should measure their size in both OBJ and B3D formats, unless your models have animation, in which case B3D is your only feasible option really.

ExeVirus created a compressor for OBJ which he claims can, for simple models, produce smaller file sizes than B3D: compress-obj. I would measure the size of your non-animated models both in B3D and in OBJ compressed with compress-obj, and choose your format based on that. If you are still trying to squeeze out bytes, you can use compress-obj's lossy options.

LMD, on the other hand, claims that with a better exporter, B3D could be even smaller. Well, keep your eye out in case that eventuates, but even if it does, keep comparing OBJ and B3D size.

Locale files
What's there to gain or lose?
  • Not a lot to gain here, you'd be lucky to gain more than 1-2 filesystem blocks.
  • But if you try the hacky technique with numeric codes, there is definitely the possibility of ruining the experience for someone who joins the server with the target language.
This really shouldn't be a goal in most cases, but perhaps you really are hurting for some space. You would be unlikely to get a reduction of more than a single filesystem block by editing these.

You can remove any comment lines and any redundant newlines. Maybe modify the text for brevity. For somewhat sensible things, that's about it.

Here's a hack for you: I think these days Minetest supports translating from some other language into English, instead of assuming English is the source language. You might be able to shorten the source strings by making them numeric identifiers, which would reduce the size of all your locale files, but I haven't experimented with it enough to be sure that you wouldn't end up messing up at least one language. It would be a bit iffy, but technically you could choose something obscure like Kazakh (kk) as the source language but just put numbers in for the source strings, and then just hope no Kazakh players with their language set to Kazakh join... Can any expert tell me if there's a non-hacky way to do this?
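To make the scheme concrete, here is a sketch of what such a locale file might look like (the mod name and strings are invented, and I haven't verified that numeric source strings behave well in practice - treat this as illustrative only):

```
# textdomain: mymod
# Hypothetical mymod.de.tr: the 'source language' strings are just
# short numeric codes, so every locale file stays small.
1=Eisenbahnschiene
2=Vorsignal
```

Every other locale file would translate the same numeric keys into its own language.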

Thankfully, the client-side translation features in Minetest 5.x are also bandwidth-optimised: Only the translation files that a client wants will be downloaded (ref).

Closing notes on disk usage
Of course, a lot of these lossy methods I just mentioned can have consequences to the asset quality, so you also have to weigh the loss of quality against file size savings.

For any file that is already less than a filesystem block in size, it's pointless to try to reduce its size because you can't reduce its disk usage. So don't worry about such small files.

Git Repository Size Optimisation

This is left until last because it is usually not as important. You can expect only powerusers or mod contributors to download the full git repository. There are a few tricks to save bandwidth for both downloaders and git authors.

Taking care about sizes
Let's start with the obvious: Reduce the number and size of your commits. That is not to say you should be afraid of changing your mod, but there are positive steps you can take to reduce the amount of churn in your repository. Sometimes the opposite is true: you should break your changes into logical steps with different commits - for instance, one commit to fix that latest bug, another one to update your README because it's gone out of date (but if you want to put the fact that you fixed the bug into your README, do that in the same commit as the bug fix - the key point is whether the changes are related).

Have a style guide and enforce it: tabs or spaces, when to indent, acceptable lua-isms, and so on. If the style guide is enforced properly there are no mixed tabs/spaces files or other things to annoy people, and they will not feel the need to change lines just to reformat them. This reduces 'diff noise', which is what happens when a commit introduces changes that aren't relevant to the actual code.

Consider squashing and rebasing before merge: Read about "rebase vs merge workflow" online. If you rebase your branch onto the target branch, there is no need for a merge commit, which reduces noise in the git log. If you use git rebase -i, you can squash commits together. This is very helpful because you can often make several 'work in progress' type commits and condense them down later when your changes have worked out to be good.
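A hedged sketch of the squash half of this (branch and commit message names are made up; note this rewrites history, so only do it to commits you haven't pushed or shared):

```shell
# Condense the last three 'WIP' commits into a single commit.
git reset --soft HEAD~3
git commit -m "Add frobnicator node"
# Replay a feature branch on top of master so merging it later is
# a fast-forward with no merge commit:
git rebase master my-feature
```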

Avoid changes to media files as much as possible, including squashing any WIP versions out of existence. Git tracks the entire content of binary files in each revision, rather than just the diffs like it does for text files. Exporting to OBJ can help with this, because it's a text format and will be diffed quite easily by git, unlike most of the model formats used for Minetest, which are binary.

Media source files
You will usually have files that produce your final media files, but that you want to keep around as the sources for those files. You should still keep these version controlled, but they are often quite large. What should you do?

Exclude these media source files from the downloaded mod size - potential mod contributors should always get your files through git. Everyone else will download your files as archives (usually .zip or .tar.gz). You can mark files as export-ignore with a .gitattributes file. Read more about it with git help attributes and git help archive. ContentDB will follow your gitattributes when creating archives for your content; git hosts like GitHub and GitLab should also obey gitattributes. Also, you can run git archive yourself to create such archives; internally, this is basically what ContentDB, git hosts and so on will be using.
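A sketch of how that looks in practice (the paths are illustrative):

```shell
# Tell 'git archive' -- and therefore ContentDB and git hosts'
# zip downloads -- to leave media sources out of archives:
printf '%s\n' 'sources/ export-ignore' '*.blend export-ignore' >> .gitattributes
git add .gitattributes
git commit -m "Exclude media sources from downloadable archives"
# Build an archive yourself to check what actually ships:
git archive --format=zip -o mymod.zip HEAD
```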

You might also want to sanitise your media files. For instance, a lot of Inkscape and Blender project files will include external file references, which often use absolute paths, which might tell people your real name or just too much information about what kind of files you keep. Try to use relative paths instead of absolute to avoid this. You can remove paths and other metadata you don't like when exporting or with external tools such as metadata strippers or hex editors.

Another great reason to switch your media source files to relative paths is so that anybody who downloads your .blend file for instance can still use external assets. If the external assets have absolute paths, chances are the next person to open your project won't be able to find the files and it will be a big mess. External assets will save space compared to including them in the .blend, plus they can be edited externally.

Check if your media files support compression from within the application you used to make them. For instance, GIMP's XCF image format can be compressed, as can Blender's .blend file format. Since these are binary files either way, git will be storing the full version of them each time, so the savings are definitely important. Fair warning though for the paranoid: you won't be able to search for and see most of the file contents you would want to redact in a hex editor if the file is compressed.

Git submodules
Submodules are a feature in any recent version of git that allows you to manage git repositories as dependencies of your git repository; run git help submodule to learn more. I usually recommend git submodules for modpacks to manage dependencies properly, like Pandorabox's modpack, but there are space savings you can make with them as well. For instance, you could make submodules optional and not clone them every time, or, if you only need them to be present but don't need every version, you can shallow clone them.
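A sketch of both ideas, with made-up URLs and paths:

```shell
# Add a dependency mod as a submodule inside a modpack:
git submodule add https://example.com/depmod.git depmod
# Mark it to be cloned shallowly (one commit deep) by default:
git config -f .gitmodules submodule.depmod.shallow true
git commit -am "Add depmod as a shallow submodule"
# Users then fetch only the latest commit of each submodule:
git clone --recurse-submodules --shallow-submodules https://example.com/modpack.git
```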

Using git lfs or similar
Take this advice with a grain of salt; I haven't actually used what I am recommending here, but I do like it in principle.

git lfs: This is a git plugin, short for "large file storage", that changes the way designated files are stored. It is provided with many installations of git, such as Git for Windows, but it is separately installable if you don't have it; on Linux and in macOS Homebrew it might be a separate package.

Git LFS saves each LFS-tracked file as a pointer to the file, rather than the entire file. This can drastically reduce the download size of the git repository. When a commit is checked out, LFS fetches the full files from the LFS server.

There is one big caveat to LFS though: The git host has to support LFS, and may even have restrictions on LFS. GitHub, GitLab and Gitea support LFS, although your Gitea host/instance may not have it installed.
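Setting LFS up in a repository is brief; a sketch (the file patterns are illustrative, and the git-lfs plugin must be installed):

```shell
# One-time setup in a repository:
git lfs install --local
# Store these media types as LFS pointers instead of full blobs:
git lfs track "*.png" "*.ogg" "*.b3d"
# Tracking rules live in .gitattributes, so commit them:
git add .gitattributes
git commit -m "Track media with Git LFS"
```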

An alternative is git-annex, which supports many other non-git-related sources for its files, such as cloud storage hosts, rather than just being bound to the LFS server.

Shallow cloning
By default when cloning a git repository, all of the history of the default branch is retrieved, going all the way back to the initial commit. This is not a problem for small, short-running and small-footprint projects. However, it can be a huge issue for projects that are very long-running, that have big binary files, and have very frequent commits. For instance, it would be inadvisable to download the entire git history of the Linux kernel, which takes up more than a gigabyte.

Here's where shallow cloning comes in. You can restrict the download of git objects back to a certain number of commits, or back to a certain date and you can exclude branches or tags. Read more with git help clone at the terminal or git bash prompt. Relevant options are: --depth, --shallow-since and --shallow-exclude.
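For example (the URL is illustrative):

```shell
# Grab only the latest commit of a big mod:
git clone --depth 1 https://example.com/bigmod.git
# Or only history from 2022 onwards:
git clone --shallow-since=2022-01-01 https://example.com/bigmod.git
# If you later need more history, deepen the clone in place:
git fetch --deepen=50
```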
Last edited by Blockhead on Mon Jul 18, 2022 14:14, edited 9 times in total.
/˳˳_˳˳]_[˳˳_˳˳]_[˳˳_˳˳\ Advtrains enthusiast | My map: Noah's Railyard | My Content on ContentDB ✝️♂

User avatar
rubenwardy
Moderator
Posts: 6969
Joined: Tue Jun 12, 2012 18:11
GitHub: rubenwardy
IRC: rubenwardy
In-game: rubenwardy
Location: Bristol, United Kingdom
Contact:

Re: Optimising mods for size

by rubenwardy » Post

Nice guide.

If you have large source files in your repository, you can use export-ignore in .gitattributes to not include it in the .zip file. As well as Git, ContentDB also supports this setting: https://content.minetest.net/help/packa ... ding-files
Renewed Tab (my browser add-on) | Donate | Mods | Minetest Modding Book

Hello profile reader

User avatar
LMD
Member
Posts: 1385
Joined: Sat Apr 08, 2017 08:16
GitHub: appgurueu
IRC: appguru[eu]
In-game: LMD
Location: Germany
Contact:

Re: Optimising mods for size

by LMD » Post

Thanks for the good writeup.
Blockhead wrote:
Mon Jul 04, 2022 05:06
Someone could write a PNG encoder though - there are already zlib/deflate libraries for Lua.
Yes, Minetest provides zlib/deflate (de)compression. I have written such a PNG encoder. On newer Minetest versions you should use minetest.encode_png though since it'll most likely be faster (since it's written in C++). In fact my encode_png implementation is mostly a polyfill for older MT versions.

In conclusion, TGA, GIF, JPG etc. could be gotten rid of entirely in favor of PNG.
Blockhead wrote:
Mon Jul 04, 2022 05:06
For 3D Models, you should measure their size in both OBJ and B3D formats, unless your models have animation, in which case B3D is your only feasible option really.
As I happen to have written a B3D reader & writer in Lua, I highly doubt that this comparison is fair. B3Ds could be smaller, but currently exporter issues prevent this: (1) some node names could be stripped to the minimum necessary (this wouldn't save much though, it's comparable to exe's obj minification tool stripping comments) (2) much more important, normals could be stripped to make the engine recalculate them based on winding order (3) most importantly, the B3D exporter currently has a bug that makes it export frames rather than keyframes.

In light of this, it might be viable to consider the .x model file format as an option, but TBH I don't have much experience with that.
My stuff: Projects - Mods - Website


Re: Optimising mods for size

by Blockhead » Post

rubenwardy wrote:
Mon Jul 04, 2022 10:22
Nice guide.

If you have large source files in your repository, you can use export-ignore in .gitattributes to not include it in the .zip file. As well as Git, ContentDB also supports this setting: https://content.minetest.net/help/packa ... ding-files
Good point, I'll be sure to make this more explicit in the next revision as being a part of .gitattributes and git archive
LMD wrote:
Mon Jul 04, 2022 11:30
Thanks for the good writeup.
You're welcome; I definitely have a forum habit of writing walls of text and it's better to sort them into an appropriate thread, where clearly they'll be better received.
LMD wrote:
Mon Jul 04, 2022 11:30
Minetest provides zlib/deflate (de)compression. I have written such a PNG encoder. On newer Minetest versions you should use minetest.encode_png though since it'll most likely be faster (since it's written in C++). In fact my encode_png implementation is mostly a polyfill for older MT versions.
Ah, thanks for reminding me. I knew there was more to the PNG vs TGA discussion that had led to something, but for some reason I didn't realise we had a full-blown encoder made due to it.
LMD wrote:
Mon Jul 04, 2022 11:30
In conclusion, TGA, GIF, JPG etc. could be gotten rid of entirely in favor of PNG.
Agreed with TGA. GIF isn't in irrlichtMt. JPEG I feel should stay for photorealistic games of say >=64px textures where texture filtering will be enabled. In future I'd actually like to see WebP support (not just for textures but for screenshots, and here on the forums).
LMD wrote:
Mon Jul 04, 2022 11:30
As I happen to have written a B3D reader & writer in Lua, I highly doubt that this comparison is fair. B3Ds could be smaller, but currently exporter issues prevent this.
Well, all I can do is cover the current state of affairs. Let me know when those issues are fixed and I'll be sure to update my advice. Until then I'll believe exe's claims of file size reduction.

LMD wrote:
Mon Jul 04, 2022 11:30
(1) some node names could be stripped to the minimum necessary (this wouldn't save much though, it's comparable to exe's obj minification tool stripping comments)
Marginal really, and in fact possibly counterproductive, similar to stripping debug info out. Not that I think we have great error reporting in Minetest if a model load fails. For instance, in future I'd like to be able to use the material names from Blender to assign my textures, instead of the seemingly random order I have to list the textures in when I have multiple material slots.
LMD wrote:
Mon Jul 04, 2022 11:30
(2) much more important, normals could be stripped to make the engine recalculate them based on winding order
I'll admit I don't know much about 3D, but speaking to a friend I thought it was not possible in general to recalculate the normals; but I could be wrong, Blender seems to do it pretty well. However, have you ever seen what happens with the current B3D exporter if you don't split your n-gons into triangles? You get wrong normals on most faces in your model, made worse by backface culling meaning you can't see most faces on the model. Not only am I worried more bugs like that might result from such stripping, I also wonder if the performance penalty is worth it. Even OBJ records face normals, so if those can also be reconstructed then that's no advantage for B3D either.
LMD wrote:
Mon Jul 04, 2022 11:30
(3) most importantly, the B3D exporter currently has a bug that makes it export frames rather than keyframes.
Good to hear there's a lot of room to improve there, but OBJ doesn't support animation at all - if you export animation with Blender's OBJ exporter you get one OBJ spat out per frame. So apples and oranges.
LMD wrote:
Mon Jul 04, 2022 11:30
In light of this, it might be viable to consider the .x model file format as an option, but TBH I don't have much experience with that.
I don't think many of us do. I think I've only seen one older mod with .x models. Also, a quick search leads me to believe exporter support for it is no better or even worse than B3D for Blender 2.8+; and I'd probably like to not stray too far into the territory of 'make your model in a hex editor' or 'hex edit your blender output to save 23 bytes per sector'.

User avatar
sirrobzeroone
Member
Posts: 593
Joined: Mon Jul 16, 2018 07:56
GitHub: sirrobzeroone
Contact:

Re: Optimising mods for size

by sirrobzeroone » Post

Also like the guide :)

Just adding a note/question on 3D meshes: aren't these by nature tiny compared to, say, image files, since they are basically text? I think my current work item, an updated mese monster with about 180 animation frames, runs about 350 bytes in B3D format (which is 3x the size of the original .x model). The texture(s) to skin him far exceed this by a factor of 10 or more, and that's using a rather minimal 16px sized skin.
I normally spend more time trying to pack the texture for the model efficiently while trying to keep it making sense from a "I need to paint this thing later" view. In the above example, packing the textures down saved a 16px x 64px swath of texture, which was about a 20% saving on the texture's original data size - just a few KBs at 16px, but it adds up :) .

Is my thinking right here? Would it be worth noting for someone new following this guide that the biggest gains in size savings, at least for models, can be made on the textures/images for a model first, before doing actual mesh optimisation?

User avatar
Blockhead
Member
Posts: 1602
Joined: Wed Jul 17, 2019 10:14
GitHub: Montandalar
IRC: Blockhead256
In-game: Blockhead Blockhead256
Location: Land Down Under
Contact:

Re: Optimising mods for size

by Blockhead » Post

sirrobzeroone wrote:
Tue Jul 05, 2022 01:24
Also like the guide :)

Just adding a note/question on 3D meshes: aren't these by nature tiny compared to, say, image files, since they are basically text?

the biggest size savings, at least for models, can be made on textures/images first, and only then on actual mesh optimisation?
It really depends on the complexity of the mesh and the number of meshes versus the number of distinct colours and their distribution in the image. PNG compresses large blocks of the same colour really well, but if your textures have a lot of detail and distinct colours they won't compress very well; you'll just have to throw some information away.

As for models, while even a 16px texture can often be bigger than them, sometimes you will have a lot of models or a very complex one. I would generally agree with your sentiment by saying the biggest gains in a lot of mods can come from converting your images to a reduced indexed colour palette - with optipng to start, and with GIMP's indexed colour converter if you need to get serious. But there are many overcomplicated meshes as well.

I'll say that if someone designed an entire game around Soothing 32 or some other palette and tilesheeted almost everything, we could end up with some really lightweight games. I'll also be removing my advice about pngcrush: it seems worse than optipng despite the more imposing name, and it actually seems to be related to the even older 'pngout'. Meanwhile, some figures online about oxipng seem impressive.

Let's explore those ideas with some case studies. These are mostly train (obvious bias) and mob mods, which I think actually covers the built and natural environment and the general scope of models in Minetest pretty well.

Case Study 1: advtrains_train_track

The main textures are 'advtrains_dtrack_shared*', which are all around 7 KiB, but they are not optimised and repeat the same pixels for the track and sleepers everywhere instead of using a separate material slot for the "track equipment" part of the mesh (the boxes under the tracks in e.g. ATC tracks, station/stop tracks). The inventory images are all under a filesystem block in size. Finally, there is 4.5 MiB of disk usage for models. For this mod, the biggest gain would be in optimising the textures and properly using the 4 material slots that are already built into the track model.

Case Study 2: doxy_minitram / advtrains_train_zugspitzbahn

The mesh of the tram is quite complex and weighs 5.8 MiB. It is way more complicated than typical for a block game. The base texture is only 336.5 KiB and the inventory image is 940 bytes; the texture is mostly flat colour, so it doesn't weigh much despite being 1024x1024 and not reusing texture sections. The tram has painting features, and the textures for that are basically 1 bit per pixel with a palette of either flat white or 100% transparent; a ^multiply modifier applies the actual colour. These textures total 108 KiB on disk despite there being 11 of them. The biggest gain in this mod would be simplifying the geometry; for instance, replacing the complex pantograph geometry with a series of planes and texture transparency, which is what the Zugspitzbahn locomotive does - it has a tiny texture and mesh (8 KiB and 189 KiB respectively).

Case Study 3: Knog the giant gorilla

The .b3d of Knog weighs in at 148.4 KiB. If you copy Knog's mesh and export it without animation, the resulting b3d is 110.4 KiB, but as OBJ it is 16.7 KiB; 16.5 KiB with exeVirus' compressor on default settings and 13.0 KiB with -precision 3. The lossy compressed version doesn't look any worse to me in Blender. It's hard to draw concrete conclusions from this, but it seems OBJ is a lot better for simple geometry. Also, b3d, which is your only realistic option for animation, has about 35% overhead for animating a mob with a few different actions (at least given our current b3d exporter).

Meanwhile, Knog's texture is only 30.8 KiB, but can be reduced to 28 KiB if you run it through optipng, or 15.6 KiB if you reduce it to a 16-colour palette with GIMP. Before the gradients were added to try to add more fake shading, the texture weighed only 14.1 KiB, or would have weighed 12.3 KiB if run through optipng. The gradients were mostly added as a misunderstanding, I think. The lesson to learn from Knog: don't try to add fake shading with gradients if you're targeting Minetest 5.2 or later, when entity shading was added.

Case Study 4: Advtrains subway / vs redo

The existing texture is only 2.1 KiB with almost entirely flat colours. My rework is 260.5 KiB; it adds actual texture to the exterior, rubber lining to the windows, seat covers, visual control stands drawn into the texture at either end, and a realistic-looking ceiling and floor. The livery texture for painting it different colours is 30.9 KiB, or 15.7 KiB when run through optipng (note to self: remember that for later). Flat textures may be tiny, but for the massive improvement in appearance I think the difference in size is really worth it.

The .blend is 1.3 MiB and the .b3d is 449.2 KiB, with separate open and close animations for the doors on each side. An OBJ export of the model without animation weighs in at 265.5 KiB, or 259.9 KiB with exeVirus' compressor in lossless mode, or 207.0 KiB in precision=3 mode. A B3D export of the mesh without the animation is 449.2 KiB. Once again, it looks like b3d has a significant overhead in geometry compared to OBJ, but not a very big overhead for animation.

Case Study 5: Mobs Animal

Big caveat to note: I have a really old version of Mobs Animal, I think (close to whatever LinuxForks uses). Still, some points remain valid about how optimisation can work.

The animals all have walking, standing and death animations. The models directory is 1.2 MiB of disk usage, with the biggest model being the cow at 294.5 KiB and the smallest the rat at 6.6 KiB. Is it really a coincidence that the biggest and smallest creatures have the biggest and smallest file sizes? I don't know, but it seems awfully convenient :) Perhaps it's just down to the cow having a lot of walking, grazing and dying animation information and the rat basically just being a few polygons sliding along the ground.

The textures directory is 49.9 KiB apparent size but 232.0 KiB disk usage, and includes variants for several mobs such as different cat and rat textures. Running optipng on all the textures reduces the apparent size to 46.9 KiB and disk usage to 224.0 KiB, which is 2 filesystem blocks saved. There are probably more savings to be made by tilesheeting the mob textures.

Case Study 6: Advtrains platforms

Advtrains has 4 types of platforms: straight high, straight low, 45° high and 45° low. The first two are Lua-defined nodeboxes and the latter two are b3ds. We'll be ignoring textures in this case, because the textures come from whatever base mod provides the material the platform is made out of (well, other than the yellow stripe, but those textures are less than a filesystem block anyway).

The 45° design comes from Och_Nö, and I exported it to b3d without thinking much at the time, because "surely a binary format will be smaller". Let's investigate that claim. The b3d models are 1.9 KiB each. A simple OBJ export will yield OBJs that are 2.9 KiB each. When run through exeVirus' compressor in lossless mode, the OBJs are 1.8 KiB, so basically 94% of the size, or a 5% gain on what was already basically nothing. Not too impressive. When run in lossy mode with precision 2, the results were not great. Let's be generous and assume it's an import/export problem or an error on my side which led to me losing the UV map information, with the model ending up offset 0.5 too high and rotated 90 degrees from where it should be. Even setting those problems aside, what I ended up with in-game had very bad z-fighting with the adjacent nodes. And the file size? Just a smidge over 1.6 KiB. What have we learned? Format hardly matters for very small geometry, since these models all ended up at less than a filesystem block.
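To make the "less than a filesystem block" point concrete, here's a quick sketch of the disk-usage arithmetic, assuming the typical 4096-byte block size defined at the top of this thread (real filesystems vary - some inline tiny files):

```python
import math

BLOCK = 4096  # typical filesystem block size, as per the OP

def disk_usage(apparent_size):
    # a non-empty file always occupies whole blocks
    # (ignoring sparse files and filesystems that inline tiny files)
    return math.ceil(apparent_size / BLOCK) * BLOCK

# the 1.9 KiB b3d and the 1.8 KiB minified OBJ both cost one block
print(disk_usage(1946))  # 4096
print(disk_usage(1843))  # 4096
```

So shaving 100 bytes off a 1.9 KiB model saves nothing on disk; only crossing a block boundary does.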
Last edited by Blockhead on Tue Jul 05, 2022 12:17, edited 2 times in total.
/˳˳_˳˳]_[˳˳_˳˳]_[˳˳_˳˳\ Advtrains enthusiast | My map: Noah's Railyard | My Content on ContentDB ✝️♂

User avatar
sirrobzeroone
Member
Posts: 593
Joined: Mon Jul 16, 2018 07:56
GitHub: sirrobzeroone
Contact:

Re: Optimising mods for size

by sirrobzeroone » Post

Blockhead wrote:
Tue Jul 05, 2022 06:13
:)
Wow, lots of good info there. Yes, spot on: I re-checked my model and I must have been having some sort of stroke before - I missed the 'k', so KiB not bytes, which puts it significantly bigger than my textures... even 4x the textures. I'm going to go see what impact thinning a few excess frames out has on file size; I'm quite big on keeping things as light as I can, although I do like to offset that with functionality. I'm intrigued about the keyframe storage - I need to go read up on the B3D format.

The cow's probably my fault: I added death and lay-down animations to it, which would have increased the size compared to the pretty much unanimated rat - I think I did that back in 2018/19.

User avatar
LMD
Member
Posts: 1385
Joined: Sat Apr 08, 2017 08:16
GitHub: appgurueu
IRC: appguru[eu]
In-game: LMD
Location: Germany
Contact:

Re: Optimising mods for size

by LMD » Post

Blockhead wrote:
Tue Jul 05, 2022 06:13
Case Study 3: Knog the giant gorilla
The .b3d of Knog weights in at 148.4 KiB. If you copy Knog's mesh and export it without animation, the resulting b3d is 110.4 KiB but as OBJ it is 16.7 KiB, 16.5 KiB with exeVirus' compressor on default
This is wrong. Exporting Knog without animation yields an 18.7 KiB b3d in my tests using modlib's b3d reader & writer (the attached file isn't really a .zip, but I didn't want to archive a single file just to be able to attach it - simply rename it to .b3d). That said, even re-exporting Knog with my writer significantly reduces the size. I suspect the b3d exporter is storing zero weights.
Blockhead wrote:
Tue Jul 05, 2022 06:13
Case Study 4: Advtrains subway / vs redo
The .blend is 1.3 MiB and the .b3d is 449.2 KiB, with separate door open and close animations for the doors on each side. An OBJ export of the model without animation weights in at 265.5 KiB or 259.9 KiB with exeVirus' compressor in lossless mode, or 207.0 KiB in precision=3 mode. A B3D export of the mesh without the animation is 449.2 KiB. Once again it looks like b3d has a significant overhead in geometry compared to OBJ but not a very big overhead for animation.
I suspect you're not stripping the animations properly. Granted, door open / close animations can be expected to be pretty lightweight, but less than 50 (probabilistic avg) / 100 (upper bound) bytes? I highly doubt it.
Blockhead wrote:
Tue Jul 05, 2022 06:13
Case Study 6: Advtrains platforms
The 45° design comes from Och_Nö and I exported it to b3d without thinking much at the time because "surely a binary format will be smaller". Let's investigate that claim. The b3d models are 1.9 KiB each. A simple OBJ export will yield OBJs that are 2.9 KiB each. When run through exeVirus' compressor in lossless mode, the OBJs are 1.8 KiB, so basically 94% the size or 5% gain on what was already basically nothing.
I think a well-minified B3D might beat that OBJ filesize, yet you should use OBJ here simply because you don't need a file format that supports animations; you can comfortably go with the simpler one with better support here.

Finally, a note on the "lossy" mode of exeVirus' obj compressor: don't ever use it. Lossy practically means reducing float precision here; it's literally rounding to decimal places. This is worsened by the fact that Minetest uses 10x model scale, so any error will be multiplied by a factor of 10. precision = 1 will create errors of avg 5 m; precision = 2, 0.5 m; precision = 3, 0.05 m; precision = 4, 0.005 m; precision = 5, 0.0005 m. This is what leads to Z-fighting. Only levels 4-5 might be acceptable (but they shave off at most two bytes from a number that uses at least 6 bytes, so the savings are negligible); at least re-"compressing" won't worsen the situation.
Attachments
knog_noanim.b3d.zip
(18.26 KiB) Downloaded 36 times
My stuff: Projects - Mods - Website

User avatar
Blockhead
Member
Posts: 1602
Joined: Wed Jul 17, 2019 10:14
GitHub: Montandalar
IRC: Blockhead256
In-game: Blockhead Blockhead256
Location: Land Down Under
Contact:

Re: Optimising mods for size

by Blockhead » Post

LMD wrote:
Tue Jul 05, 2022 14:00
Blockhead wrote:
Tue Jul 05, 2022 06:13
Case Study 3: Knog the giant gorilla
This is wrong. Exporting Knog without animation yields a b3d with 18.7 KiB size from my tests using modlib's b3d reader & writer ...
My result is the result of copying and pasting it into another Blender window. I didn't even know about your b3d code, but clearly it's much better for file size. I was just using the tools available to me. I also don't know enough about animation to really speak with authority there, so maybe I will refrain unless I learn more in future.
LMD wrote:
Tue Jul 05, 2022 14:00
Blockhead wrote:
Tue Jul 05, 2022 06:13
Case Study 4: Advtrains subway / vs redo
I suspect you're not stripping the animations properly. Granted, door open / close animations can be expected to be pretty lightweight, but less than 50 (probabilistic avg) / 100 (upper bound) bytes? I highly doubt it.
Sorry, that's a typo. Actually, I think we have lost the original .blend of the subway wagon, or at least the latest version of it, and the greenxenith b3d importer strips all animation and seems to lose normal information, because everything ends up shaded sharp instead of smooth like the actual model. Re-exporting this with greenxenith's gives a file of 224K, probably less with yours.
LMD wrote:
Tue Jul 05, 2022 14:00
Blockhead wrote:
Tue Jul 05, 2022 06:13
Case Study 6: Advtrains platforms
I think a well-minified B3D might beat that OBJ filesize, yet you should use OBJ here simply because you don't need a file format that supports animations; you can comfortably go with the simpler one with better support here.
None of it really matters to me, because it's all < 4K in size, so on most systems it makes no difference to disk usage. Another advantage of OBJ nobody has pointed out: git can actually text-diff it, making minor changes to the model easier on your repo's disk usage.
LMD wrote:
Tue Jul 05, 2022 14:00
Finally, a note on the "lossy" mode of exe_virus obj compressor: Don't ever use it. ... This is worsened by the fact that Minetest uses 10x model scale ...
You can set the visual scale to (1,1,1) to work around that, but that's not often used, and applying it retroactively would be a pain. Why does Minetest even have that scale applied anyway? It's quite literally the first problem anyone modelling for Minetest encounters. It makes things harder if you're drawing with actual measurements like 250 mm this, 30 mm that... Not everyone knows how to set up Blender properly to deal with such things. I agree lossy mode is a bad idea, for the reasons both you and I have pointed out: huge error, z-fighting...
/˳˳_˳˳]_[˳˳_˳˳]_[˳˳_˳˳\ Advtrains enthusiast | My map: Noah's Railyard | My Content on ContentDB ✝️♂

erlehmann
Member
Posts: 62
Joined: Mon Jan 31, 2022 00:41
GitHub: erlehmann
IRC: erlehmann

Re: Optimising mods for size

by erlehmann » Post

Blockhead wrote:
Mon Jul 04, 2022 05:06
The main texture format in Minetest is PNG. There is also support for JPEG, BMP and TGA. That last one got particularly controversial at one point.

My advice? Don't use BMP or TGA unless it's for a tiny, tiny file - even though a lot of files are going to be smaller than 4K. While BMP and TGA do support basic RLE compression when created by some programs, PNG's compression is going to be better in most cases. There's also no guarantee Minetest will handle the combination of palettes and RLE scheme used correctly.
Having investigated image file formats quite a bit, I want to add some information to paint a better picture (hehe):

1. BMP is a complex image format for its modest feature set, and the complexity does not really benefit anyone much. It has implementation weirdnesses like having to add padding bytes so the number of bytes in a scanline is divisible by four. In fact, BMP is so needlessly complex that if someone suggests using BMP for a texture in a game, that person probably has no idea about texture formats. Go look at the Wikipedia article or the BMP parsing code in Irrlicht if you doubt any of this.

2. TGA is the simplest binary texture format that is still in use in contemporary games (e.g. Minecraft). This usually surprises people who do not know much about texture file formats, but the reason is that it has a very usable feature set for its minimal structure – first comes a 20 (or so) byte header, then the (raw / colormapped / RLE-encoded) pixels, and at last the footer. One idiosyncrasy is that you can specify whether the image starts in the bottom left or top left corner. It is also simple enough that you can verify manually with a hex editor that an image is correctly decoded, and you can do in-place updates without re-encoding.

This simplicity of implementation comes at the cost of worse compression (only run-length encoding), but this is actually barely relevant for small images. Note that the TGA footer is technically optional, but Minetest requires it (and it has to stay that way to not create confusion). I do not think Minetest handles the combination of RLE and colormaps at all, and I consider that unfortunate, because it is really good compression at low complexity. However, adding it now would a) add more code that would be rarely used and b) risk that TGA images created for newer versions of Minetest would not work in older versions, so I think it should not be done.
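As a sketch of how minimal that structure is, here is a toy uncompressed true-colour TGA encoder in Python - illustrative only, not tga_encoder's actual code, with field values per my reading of the TGA spec (the header is in fact 18 bytes):

```python
import struct

def encode_tga_bgr(width, height, pixels_bgr):
    # 18-byte header: no image ID, no colour map, image type 2
    # (uncompressed true-colour), 24 bits per pixel, top-left origin
    header = struct.pack(
        "<BBBHHBHHHHBB",
        0, 0, 2,    # id length, colour map type, image type
        0, 0, 0,    # colour map spec (unused here)
        0, 0,       # x/y origin
        width, height,
        24,         # bits per pixel
        0x20,       # descriptor: top-left origin
    )
    # 26-byte TGA 2.0 footer (technically optional, but Minetest wants it)
    footer = struct.pack("<II", 0, 0) + b"TRUEVISION-XFILE." + b"\x00"
    return header + pixels_bgr + footer

img = encode_tga_bgr(2, 2, bytes([255, 0, 0] * 4))  # four blue pixels (BGR)
print(len(img))  # 18 + 12 + 26 = 56 bytes
```

The whole file is header + raw pixels + footer; that's why in-place edits and hex-editor verification are feasible.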

Some time back, I extended the tga_encoder mod that Fleckenstein made for MineClone2; it is useful if you want to create small images dynamically within Minetest. Its documentation contains notes about when TGA might be a good choice of image format even when you have more advanced formats like PNG available: https://content.minetest.net/packages/e ... a_encoder/

3. PNG is a complex and very widely used and supported texture format with many features, some of which are not guaranteed to work across implementations. For example, Minetest will mishandle gamma information (i.e. colors appear with the wrong brightness if you are relying on that) and will not play animations (animated PNG is an unofficial extension of PNG that is rejected by PNG creators, but supported by many browsers). If you want to figure out if a feature works in Minetest, you can find test files here: http://www.schaik.com/pngsuite/

PNG files can be compressed very well. However, good compression is not a property of the file format, but of the encoding process. Also, many people forget the relatively huge overhead of about 70 bytes before PNG starts encoding a single pixel. Many PNG files included in minetest_game have a size of between 100 and 200 bytes – so you can see how good the compression must be to fit a 16×16 image (256 pixels!) into the remaining bytes once you mentally subtract the 70 bytes of overhead.

Because of this overhead, regardless of how good the compression is, you can easily beat PNG on filesize just by dumping raw pixels, for any texture small enough to be encoded in 69 bytes or less in some other file format. The lesson here is that if you really want to minimize filesize at any cost, you should not only look at how good the compression is, but also take note of the overhead of the file format.
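That ~70-byte figure is easy to verify by adding up PNG's fixed framing; a sketch in Python (structure only, no pixel data):

```python
import struct
import zlib

def chunk(ctype, data):
    # every PNG chunk costs 12 bytes of framing: length, type, CRC
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

signature = b"\x89PNG\r\n\x1a\n"                                     # 8 bytes
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 6, 0, 0, 0))  # 25 bytes
iend = chunk(b"IEND", b"")                                           # 12 bytes

# signature + IHDR + IEND + an empty IDAT's framing, before any pixel data
overhead = len(signature) + len(ihdr) + len(iend) + 12
print(overhead)  # 57; the zlib stream inside IDAT adds another ~6
```

57 bytes of framing plus the zlib header/checksum lands right around the ~70 bytes mentioned above.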

Realistically, the difference in filesizes between PNG and TGA tilts towards TGA at sizes of about 8×8 or less and towards PNG at sizes of about 16×16 or more. If you only consider filesize, either is a usable choice for a game.
Blockhead wrote:
Mon Jul 04, 2022 05:06
However, there are even more reasons not to use TGA now. mcl_maps, part of MineClone2/5, was using TGA for its maps because TGA is an easier format to encode from Lua than PNG.
AFAIK the maps mod in MineClone2 (and Mineclonia) still uses TGA. The lead developer of MineClone5 – kay27 – changed the mod to use minetest.encode_png() in the MineClone5 version – and also changed other things in the mod against the advice of literally everyone else who had previously worked on it (mainly Fleckenstein and me). These actions broke every existing map on every MineClone5 server. Kay27 pointed out that you could fix that with a shell script, but then forgot to apply it to his own server, until someone pointed it out. Also, there was a race condition with maps that went unfixed for weeks, and maps did not work on Windows for some time … I do not want to slander MineClone5 too much, but it is a project that follows the philosophy "move fast and break things" (even when it is not necessary to actually break things).

If anyone reading this wants to know how to avoid the issues I just mentioned, I have made a standalone mod named xmaps that you can try out: https://content.minetest.net/packages/erlehmann/xmaps/
Blockhead wrote:
Mon Jul 04, 2022 05:06
The Minetest devs had almost forgotten/never knew anyone was using TGA, and ripped it out to reduce how much code they had to manage. Well, the easiest way to find out if somebody uses some feature is to remove it and see if any complaints come in.
I remember it differently: basically, the Minetest developers forked the Irrlicht library (a 3D rendering library). One reason is that Minetest was not using many of Irrlicht's features. I still think that Minetest should have made better use of Irrlicht while it lasted – Irrlicht's real-time shadows and particle systems are much more performant and require much less capable hardware than what Minetest contains, but no one used those things in years, so I guess that option is gone for good now. Another reason is that Minetest developers want to change a lot about rendering and have been talking about dropping support for older hardware – and when you are not interested in contributing upstream, the easiest route is to fork something instead of discussing with Irrlicht developers who sometimes even test if their code still runs on Windows 98.

Anyways, at some point, hecktest removed everything that hecks thought was obsolete, and in the process ignored multiple hints by MineClone2 developers that TGA was used (the tga_encoder mod was created by Fleckenstein for MineClone2) – and could not easily be replaced, since it was used for MineClone2 maps, which are probably the hardest-to-craft item among all mods … as players have to arrange nodes in a huge area in the game to make the handheld map show a view from above.

(I remember someone™ actually did the “you can use BMP instead of TGA” thing at some point, demonstrating their file format knowledge. I wish it was always so easy to notice when someone has no clue.)

To be fair: I think removing code you do not ever need can be a good idea. After all, deleted code is debugged code. The way this was done here, though, was an absolute shit show: not only were legitimate complaints ignored, even tests were removed instead of repaired when they failed due to something being removed. This whole changeset – a removal of about 200k lines of code – was approved in a day or so … which suggests to me that no one really reviewed or discussed it. I think the correct way to do such a thing is piece by piece, while taking objections seriously.

After that, CDB was searched to figure out if other file formats were still in use. I actually advocated to remove BMP, but IIRC someone had used it. Thankfully, nothing else was found that was still in use.
Blockhead wrote:
Mon Jul 04, 2022 05:06
Long story short, a PNG encoder was written in C++ and added to Minetest so that the excuse of "It's easier to encode" for TGA no longer rang true, but TGA support was also added back.
I actually suspected that minetest.encode_png() and the [png texture modifier (which is premium hot garbage and probably the single worst API addition in Minetest by a large margin in the last 2 years or so) were added as a response to that, so that people could make maps easily, but IIRC sfan5 disputed that at one point on IRC.

Anyways, I consider minetest.encode_png() a quite poor API. It looks quite useful at first – but so does a fridge magnet that turns out to not be magnetic enough to hold its own weight … a big problem is that minetest.encode_png() only ever does RGBA and does not use prefilters. This means it does not take advantage of PNG well, which shows in the filesize: The devtest mod contains example code to generate a checkerboard texture and save it to a PNG file. That file can be reduced by optipng to 5% of its size. This drives home that PNG has really good compression – but just using PNG is not enough.
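To get a feeling for what the RGBA-only limitation costs, you can deflate the same checkerboard as raw RGBA bytes versus one-byte palette indices. This is only an analogy for what a colour-type-aware encoder gains - it is not what minetest.encode_png() or optipng literally does:

```python
import zlib

SIZE = 16
white, black = b"\xff\xff\xff\xff", b"\x00\x00\x00\xff"

# the same 16x16 checkerboard, once as RGBA (all encode_png can emit)
# and once as one-byte indices into a two-entry palette
rgba = b"".join(white if (x + y) % 2 == 0 else black
                for y in range(SIZE) for x in range(SIZE))
indexed = bytes((x + y) % 2 for y in range(SIZE) for x in range(SIZE))

print(len(rgba), len(indexed))  # 1024 vs 256 bytes of raw payload
print(len(zlib.compress(rgba, 9)), len(zlib.compress(indexed, 9)))
```

The palette representation starts from a quarter of the raw data, which is the kind of head start an RGBA-only encoder gives away before compression even begins.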

This means that if your mod creates dynamic textures, but you want or need anything other than RGBA pixels shoved into zlib/deflate, minetest.encode_png() is not exactly the best idea. In my xmaps mod, for example, I used a colormap with the same color occurring multiple times for different features, so I can recolor it later. If Minetest did not leak memory like crazy when I do that, I would also have tried to use the in-place editing capability of TGA to edit xmaps textures without having to re-encode them for a partial reveal of maps.

You can look at the following code of tga_encoder to see what minetest.encode_png() is lacking: https://git.minetest.land/erlehmann/tga ... amples.lua

Obviously, often the files created by tga_encoder are larger than a properly encoded PNG file would be. But as Minetest does not provide an API as capable as tga_encoder and probably will not do in the foreseeable future, I suggest to use tga_encoder for small dynamic images. The use of TGA in the maps mods is obviously not because of the compression, but because of compatibility and ease of use of TGA.
Blockhead wrote:
Mon Jul 04, 2022 05:06
For textures, optipng, oxipng and similar programs are common for PNG format.
The program pngcrush also exists, but I do not know how it is meaningfully different.

FYI, my usual optipng invocation is: optipng -O7 -strip all $FILENAME

This removes the metadata in the PNG image, which can save a lot of bytes. I once encountered several PNG images in MineClone2 that had several kilobytes of metadata, while their pixel payload was much less. That was a bit weird.
Blockhead wrote:
Mon Jul 04, 2022 05:06
For something more extreme you can try opening the file in GIMP, and converting to indexed colour with a small palette size (say 8-32 colours, depending on texture complexity), then exporting with minimal metadata (untick most/all of the export option boxes).
Converting to indexed color can lead to bands of solid color showing up. Converting images with dithering to avoid this “banding” is often given as a good tip for better compression, but it actually often makes images compress worse, as with dithering there are fewer repeated runs of pixels than if you have visible banding. So if you see it as a tip, you can – similar to the BMP thing – safely assume whoever gives it to you is interested in “looking retro” much more than good compression. You can find such suggestions often on websites of “solarpunk” web developers.

However, note that a 16×16 image has only 256 pixels, so it contains at most 256 distinct colors and a palette index will always fit into one byte. You are therefore guaranteed not to lose any information by simply converting the image to a colormapped one at that size or lower. You are not guaranteed a lower filesize though, as the colormap takes up space too – it depends on how often pixels are repeated. In the worst case, where you have colored noise with every pixel being different, no compression algorithm is going to help much.

I suggest to always strip PNG metadata using optipng after saving. Otherwise, the file might still contain some unwanted information. Just be aware that for bigger images it can take a long time and ultimately the savings may not be worth it.
Blockhead wrote:
Mon Jul 04, 2022 05:06
You can also put any images that share an indexed colour palette into the same file and use Minetest's tilesheet functionality. This will save disk usage if the apparent size of any file would be less than 4K
It can indeed be a good idea to have a single texture like that – in fact, many games do it like that. However, I have rarely seen it being employed. Do you have examples of tilesheet usage in mods? I wish to learn more about it.
Blockhead wrote:
Mon Jul 04, 2022 05:06
JPEG would be suitable in case you are using high resolution textures and want to save space - you can tune the JPEG quality according to how you want your space/quality tradeoff to go.
A tip I have for JPEG compression is to reduce the quality of the color information. JPEG already saves color information in a lower resolution than contrast information, but at least in GIMP you can turn it down a bit more.

Blockhead wrote:
Mon Jul 04, 2022 05:06
Compared to music, sound effects usually take up minimal space but again you can still reduce the file size by re-encoding. The space/quality tradeoff is similar to JPEG because OGG is also a lossy compression.
It should be mentioned that there exist some sounds that can not be compressed well using Ogg Vorbis. For example, the entire genre of bytebeat music made by very small computer programs sounds awful or bloats up in size if you re-encode the typical 8 kHz mono bytebeat as Ogg Vorbis. This means you should listen to the results and look at the filesize.

Re-encoding from a lossy file can introduce audible artifacts. As a mod developer or game developer you should always keep the version of the file that you would edit. This was mentioned in the quoted post, but I want to emphasize it again here.
Last edited by erlehmann on Mon Jul 18, 2022 10:28, edited 1 time in total.
cdb_b9da8bbc6338

User avatar
Blockhead
Member
Posts: 1602
Joined: Wed Jul 17, 2019 10:14
GitHub: Montandalar
IRC: Blockhead256
In-game: Blockhead Blockhead256
Location: Land Down Under
Contact:

Re: Optimising mods for size

by Blockhead » Post

erlehmann wrote:
Mon Jul 18, 2022 01:25
Having investigated image file formats quite a bit, I want to add some information to paint a better picture (hehe):
Thank you for your extended discussion of TGA, BMP and PNG, erlehmann. The story is obviously more complicated than my summary of it. However, I simply couldn't justify writing a longer version, because the more detail I include, the more this becomes a history lesson instead of a practical guide with the right degree of brevity. I still feel my account is acceptably accurate, and I did try to be even-handed. I'll add a direct link to your post so people can jump to your longer account of the history.

I think if you want me to update the advice, you should give me a clear and concise statement. People need more than a history lesson; they need practical advice, and that's what this thread is for. Would you agree with the statement "you may want to use TGA via tga_encoder for small dynamic media, because that saves bandwidth"? Otherwise I cannot really justify advising against PNG, because:
  1. TGA still seems only worth it for files less than a filesystem block (4KiB), for which apparent size is a moot point for disk usage and negligible for network transfer.
  2. The PNG encoder may be improved in future, even if you think this is unlikely, which would mean I'd need to change my advice. I don't want to seem lazy, but I'd rather keep my advice this way than hold my breath.
  3. Most people author art assets solely in their image editor, 3D modeller and so on. The contents of the guide should be followable by someone who is not a programmer, though we may assume someone who can use a command line program such as the aforementioned optipng. "No programming needed" may sound counter-intuitive since we're talking about mod authoring, but media authoring and code authoring are two separate tasks that can be assigned within a team, and asset authors who depend on others to write the code that uses their assets should provide size-optimised assets. Following this reasoning, I can't advise people to use your tga_encoder unless you provide an easy way to run it as a lua/luajit command line program (or with a shebang on UNIX-likes). Currently one has to dofile your tga_encoder and call its API in order to use it outside Minetest.
Blockhead wrote:
Mon Jul 04, 2022 05:06
For textures, optipng, oxipng and similar programs are common for PNG format.
erlehmann wrote:
Mon Jul 18, 2022 01:25
The program pngcrush also exists, but I do not know how it is meaningfully different.

FYI, my usual optipng invocation is: optipng -O7 -strip all $FILENAME

This removes the metadata in the PNG image, which can save a lot of bytes. I once encountered several PNG images in MineClone2 that had several kilobytes of metadata, while their pixel payload was much less. That was a bit weird.
I mentioned pngcrush in my previous posts; it seems to be an older program than optipng and not useful now that optipng and oxipng exist. oxipng results vary and can be either better or worse than optipng according to benchmarks. I'll add your optipng invocation advice to the OP.
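As a practical sketch, that optipng invocation can be batch-run over a whole mod (textures/ is the conventional Minetest media directory here; assumes optipng is installed):

```shell
# optimize every PNG under a mod's textures/ directory in place
mkdir -p textures   # no-op if the directory already exists
find textures -name '*.png' -exec optipng -O7 -strip all -quiet {} +
```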
Blockhead wrote:
Mon Jul 04, 2022 05:06
For something more extreme you can try opening the file in GIMP, and converting to indexed colour with a small palette size (say 8-32 colours, depending on texture complexity), then exporting with minimal metadata (untick most/all of the export option boxes).
erlehmann wrote:
Mon Jul 18, 2022 01:25
Converting to indexed color can lead to bands of solid color showing up. Converting images with dithering to avoid this “banding” is often given as a good tip for better compression, but it actually often makes images compress worse.
Good point, I will update my advice to explicitly state you should not use dithering if you want to actually reduce the file size and link to your post for more detail.
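For those not using GIMP interactively, the same conversion can be sketched with ImageMagick (the 16-colour count is an arbitrary example; assumes ImageMagick's convert is installed):

```shell
# generate a small test gradient, then convert it to a 16-colour palette
# with dithering explicitly disabled; the png8: prefix forces an indexed PNG
convert -size 32x32 gradient:green-black grad.png
convert grad.png -dither None -colors 16 png8:grad_indexed.png
wc -c grad.png grad_indexed.png
```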
erlehmann wrote:
Mon Jul 18, 2022 01:25
Do you have examples of tilesheet usage in mods? I wish to learn more about it.
Sadly not, but the API is dead simple so I'm sure as long as you had art assets you could play around with it. You could look at what the results would be if you refactored Minetest Game to use a spritesheet of Zughy's Soothing 32 texture pack (which uses a 32-colour palette) instead of using individual textures for each node as it does now. Just pack all the node textures into one file with that 32-colour palette and apply the tilesheet to the texture definitions of each node.
erlehmann wrote:
Mon Jul 18, 2022 01:25
A tip I have for JPEG compression is to reduce the quality of the color information. JPEG already saves color information in a lower resolution than contrast information, but at least in GIMP you can turn it down a bit more.
My advice rephrased is to use the quality slider when exporting. Is your advice the same, or is it to reduce the bits per channel? Can that be done in GIMP? I thought most images were authored in 8-bit-per-channel precision, and GIMP does not have any options below 8-bit precision under its Image -> Precision menu, nor can I see other quality options in the JPEG export window in GIMP that seem to exactly match what you mean. There are some interesting settings under Advanced, though I don't understand them.
Blockhead wrote:
Mon Jul 04, 2022 05:06
Compared to music, sound effects usually take up minimal space but again you can still reduce the file size by re-encoding. The space/quality tradeoff is similar to JPEG because OGG is also a lossy compression.
erlehmann wrote:
Mon Jul 18, 2022 01:25
It should be mentioned that there exist some sounds that can not be well compressed using Ogg Vorbis. For example, the entire genre of bytebeat music made by very small computer programs sounds awful or bloats up in size if you re-encode the typical 8 kHz mono bytebeat as Ogg Vorbis. This means you should listen to the results and look at the filesize.
Well yes, if the master format is something like MIDI or an old-school tracker format (sorry, I don't know much about them) then that's going to be a lot smaller, but Minetest can't play it back. I'll add the advice that I neglected: of course you need to listen back to your files (though that really applies to any lossy system), and maybe a short note that your OGG may end up larger than your authoring format.
erlehmann wrote:
Mon Jul 18, 2022 01:25
Re-encoding from a lossy file can introduce audible artifacts. As a mod developer or game developer you should always keep the version of the file that you would edit. This was mentioned in the quoted post, but I want to emphasize it again here.
I don't think I quite emphasised it enough, so I will try to improve that at the start, where I discuss lossy compression, and probably later in the section about source assets.
/˳˳_˳˳]_[˳˳_˳˳]_[˳˳_˳˳\ Advtrains enthusiast | My map: Noah's Railyard | My Content on ContentDB ✝️♂

erlehmann
Member
Posts: 62
Joined: Mon Jan 31, 2022 00:41
GitHub: erlehmann
IRC: erlehmann

Re: Optimising mods for size

by erlehmann » Post

Blockhead wrote:
Mon Jul 18, 2022 04:42
erlehmann wrote:
Mon Jul 18, 2022 01:25
Having investigated image file formats quite a bit, I want to add some information to paint a better picture (hehe):
Thank you for your extended discussion of TGA, BMP and PNG erlehmann. The story is obviously more complicated than my summary of it. However, I simply couldn't justify writing a longer version of the story because the more detail I include about the story the more it's just going to make it a history lesson and not a practical guide with the right degree of brevity. I still feel like my account is acceptably accurate and I did try to be even-handed. I'll add a direct link to your post so people can jump to your longer account of history.
Thanks.
Blockhead wrote:
Mon Jul 18, 2022 04:42
I think if you want me to update the advice you should give me a clear and concise statement. People need more than a history lesson, they need practical advice and that's what the thread is for. Would agree with the statement "you may want to use TGA via tga_encoder for small dynamic media, because that saves bandwidth"?
Every 100% correct statement that I can give is either subtly wrong or almost practically useless.
Stuff like … a monochrome bitmap with dimensions 6×6 or less will always be smaller as a TGA.

One issue with compression is that logic dictates that not all inputs can become smaller.
It is easy to construct pathological cases; ordinary users are not really interested in them.

Btw, what is small? Are you generating a 7×7 bitmap? TGA probably saves some bandwidth.
Are you generating a 10×10 monochrome bitmap? It could be that TGA saves some bandwidth.
Are you generating a 16×16 texture? A TGA file is likely larger than a PNG file, but sometimes not.

Two tricks to minimize transfer size are format-independent, but only tga_encoder can do them currently AFAIK:

1. Using colormaps usually saves 2 bytes per pixel compared to 24bpp. It depends on how many colors you have.
2. Using 16bpp colors. It saves an additional 1 byte per color in a colormapped image, or 1 byte per pixel in a raw image.
Blockhead wrote:
Mon Jul 18, 2022 04:42
Otherwise I cannot really justify advising against PNG because
  1. TGA still seems only worth it for files less than a filesystem block (4KiB), for which apparent size is a moot point for disk usage and negligible for network transfer.
I agree that for anything bigger than a filesystem block, TGA usage has to be justified with something other than filesize.

However, an interesting case can be made to use TGA for minimizing the size of the zip file download of a mod:
My example code generates a 16×16 bitmap and saves it as both an uncompressed and an RLE-compressed TGA.
I took these bitmaps and converted them to PNG and then used “optipng -O7 -strip all” to optimize the PNG.

The file sizes in bytes are:

309 gradients_24bpp_raw.png
812 gradients_24bpp_raw.tga
309 gradients_24bpp_rle.png
484 gradients_24bpp_rle.tga

No surprises here – PNG is a better choice for this particular 16×16 RGB image.

Well, the command “du -h” confirms that each of those files is using the same storage, 4k.
However, when I zipped the files (as if they were published as a mod), I got these sizes:

421 gradients_24bpp_raw.png.zip
383 gradients_24bpp_raw.tga.zip
421 gradients_24bpp_rle.png.zip
380 gradients_24bpp_rle.tga.zip

The difference is a whopping 50 bytes … you can get even larger differences when adding more files per zip file.
So while reported byte size favors PNG, this seems like a good case for distributing small textures as TGA if your download is zipped – which is exactly what game developers have been doing for 20 years or so.

In xmaps I store the map image in item meta in TGA format, compressed using minetest.compress() (i.e. deflate).
Blockhead wrote:
Mon Jul 18, 2022 04:42
The PNG encoder may be improved in future, even if you think this is unlikely, which would mean I need to change my advice. I don't want to seem lazy but I'd rather keep my advice this way and hold my breath.
I think such advice should depend on your use case and on how good the available tooling is.
In the case of minetest.encode_png() vs tga_encoder, you really have to benchmark the thing.

I personally suggest to favor tga_encoder for transfer size for every bitmap of size 8×8 and below, regardless of how good minetest.encode_png() becomes. But as long as you can measure the result in filesystem blocks, there is really not much of a difference – though you absolutely should compress and base64 encode TGA images when stored in item meta or similar places.
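A shell sketch of that "compress, then base64" ordering, with gzip standing in for minetest.compress() (both produce deflate-based streams; the 1 KiB file of zeros is a stand-in for a highly redundant TGA). base64 adds roughly 33% to whatever you feed it, so compressing first keeps the stored string much smaller:

```shell
# stand-in for a highly redundant TGA image
head -c 1024 /dev/zero > img.tga
base64 img.tga | wc -c               # encode only: ~1.4 KB
gzip -9 -c img.tga | base64 | wc -c  # compress first, then encode: far smaller
```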
Blockhead wrote:
Mon Jul 18, 2022 04:42
Most people author art assets solely in their image editor, 3D modeler and so on. The contents of the guide should actually be followable by someone who is not a programmer, but we may assume someone who can use a command line program such as the aforementioned optipng. "No programming needed" may sound counter-intuitive since we're talking about mod authoring, but media authoring and code authoring are two separate tasks that can be assigned within a team, and asset authors who will depend on others to write code that uses their assets should provide size-optimised assets. Following this reasoning I can't advise people use your tga_encoder unless you provide an easy way to run it via lua/luajit command line program (or shebang on UNIX-likes). Currently one has to dofile your tga_encoder and call its API in order to use it outside Minetest.
There is no complicated conversion needed – a lot of painting programs like GIMP and mtpaint support TGA natively.
cdb_b9da8bbc6338

erlehmann
Member
Posts: 62
Joined: Mon Jan 31, 2022 00:41
GitHub: erlehmann
IRC: erlehmann

Re: Optimising mods for size

by erlehmann » Post

Blockhead wrote:
Mon Jul 18, 2022 04:42
erlehmann wrote:
Mon Jul 18, 2022 01:25
Do you have examples of tilesheet usage in mods? I wish to learn more about it.
Sadly not, but the API is dead simple so I'm sure as long as you had art assets you could play around with it. You could look at what the results would be if you refactored Minetest Game to use a spritesheet of Zughy's Soothing 32 texture pack (which uses a 32-colour palette) instead of using individual textures for each node as it does now. Just pack all the node textures into one file with that 32-colour palette and apply the tilesheet to the texture definitions of each node.
Sorry, without any example I consider this advice baseless and useless.
Blockhead wrote:
Mon Jul 18, 2022 04:42
erlehmann wrote:
Mon Jul 18, 2022 01:25
A tip I have for JPEG compression is to reduce the quality of the color information. JPEG already saves color information in a lower resolution than contrast information, but at least in GIMP you can turn it down a bit more.
My advice rephrased is to use the quality slider when exporting. Is your advice the same, or is it to reduce the bits per channel? Can that be done in GIMP? I thought most images were authored in 8-bit-per-channel precision, and GIMP does not have any options below 8-bit precision under its Image -> Precision menu, nor can I see other quality options in the JPEG export window in GIMP that seem to exactly match what you mean. There are some interesting settings under Advanced, though I don't understand them.
For subsampling, select “4:2:0 (chroma quartered)”. This further reduces the color resolution and should affect filesize.
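The same subsampling choice can be made from the command line with ImageMagick (quality 75 is an arbitrary example value; assumes ImageMagick's convert is installed):

```shell
# make a test image, then export a JPEG with quartered chroma resolution
convert -size 64x64 gradient:red-blue input.png
convert input.png -sampling-factor 4:2:0 -quality 75 output.jpg
wc -c output.jpg
```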
Blockhead wrote:
Mon Jul 18, 2022 04:42
Blockhead wrote:
Mon Jul 04, 2022 05:06
Compared to music, sound effects usually take up minimal space but again you can still reduce the file size by re-encoding. The space/quality tradeoff is similar to JPEG because OGG is also a lossy compression.
erlehmann wrote:
Mon Jul 18, 2022 01:25
It should be mentioned that there exist some sounds that can not be well compressed using Ogg Vorbis. For example, the entire genre of bytebeat music made by very small computer programs sounds awful or bloats up in size if you re-encode the typical 8 kHz mono bytebeat as Ogg Vorbis. This means you should listen to the results and look at the filesize.
Well yes, if the master format is something like MIDI or an old-school tracker format (sorry, I don't know much about them) then that's going to be a lot smaller, but Minetest can't play it back. I'll add the advice that I neglected: of course you need to listen back to your files (though that really applies to any lossy system), and maybe a short note that your OGG may end up larger than your authoring format.
Just a note, this was not about tracker music, but low-bitrate WAV files that can not be well compressed using Vorbis.
cdb_b9da8bbc6338

erlehmann
Member
Posts: 62
Joined: Mon Jan 31, 2022 00:41
GitHub: erlehmann
IRC: erlehmann

Re: Optimising mods for size

by erlehmann » Post

Since you want cold hard truth, I made some estimations:

Any square bitmap will always fit in a filesystem block encoded as a TGA, if …

• … it is uncompressed, monochrome and is of size 63×63 or less.
• … it is colormapped with 16bpp colors (1 bit A, 5 bit for each of R/G/B) and is of size 59×59 or less.
• … it is colormapped with 24bpp colors (8 bit for each of R/G/B) and is of size 57×57 or less.
• … it is colormapped with 32bpp colors (8 bit for each of R/G/B/A) and is of size 55×55 or less.
• … it is uncompressed with 16bpp colors and is of size 45×45 or less.
• … it is uncompressed with 24bpp colors and is of size 36×36 or less.
• … it is uncompressed with 32bpp colors and is of size 31×31 or less.

At those sizes, I expect most images to compress much better using a PNG encoder, but I am not sure how bad minetest.encode_png() is and I do not care enough to find out – as I said before, for larger textures TGA is used despite its filesize and because of other reasons.
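Those limits can be rechecked with a little shell arithmetic. This sketch assumes an 18-byte TGA header, the 26-byte TRUEVISION-XFILE footer Minetest expects, and a full 256-entry colormap for the colormapped modes; it reproduces the same maximum edge lengths as the list above:

```shell
#!/bin/sh
# for each mode: name, bytes per pixel, colormap size in bytes
block=4096 header=18 footer=26
for mode in "monochrome 1 0" "colormapped-16bpp 1 512" "colormapped-24bpp 1 768" \
            "colormapped-32bpp 1 1024" "raw-16bpp 2 0" "raw-24bpp 3 0" "raw-32bpp 4 0"; do
  set -- $mode
  budget=$(( block - header - footer - $3 ))   # bytes left for pixel data
  edge=1
  while [ $(( (edge + 1) * (edge + 1) * $2 )) -le "$budget" ]; do
    edge=$(( edge + 1 ))
  done
  echo "$1: ${edge}x${edge}"    # e.g. "monochrome: 63x63" … "raw-32bpp: 31x31"
done
```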
cdb_b9da8bbc6338

User avatar
Blockhead
Member
Posts: 1602
Joined: Wed Jul 17, 2019 10:14
GitHub: Montandalar
IRC: Blockhead256
In-game: Blockhead Blockhead256
Location: Land Down Under
Contact:

Re: Optimising mods for size

by Blockhead » Post

erlehmann wrote:
Mon Jul 18, 2022 12:25
Every 100% correct statement that I can give is either subtly wrong or almost practically useless.
Stuff like … a monochrome bitmap with dimensions 6×6 or less will always be smaller as a TGA.

Btw, what is small? Are you generating a 7×7 bitmap? TGA probably saves some bandwidth.
Are you generating a 10×10 monochrome bitmap? It could be that TGA saves some bandwidth.
Are you generating a 16×16 texture? A TGA file is likely larger than a PNG file, but sometimes not.
erlehmann wrote:
Mon Jul 18, 2022 12:58
Since you want cold hard truth, I made some estimations:
...
Well, it looks like we have arrived at some rules of thumb at least (the first quote). That's mostly how they are to be understood. As ever you have to test everything, but rules of thumb have some value. The second quote, the 'cold hard truth', is, as ever, quite complicated. The key word here is 'generating', an application I did not consider as much as manual content authoring when writing the OP. You have made your point quite well for TGA's applications in generated images.
erlehmann wrote:
Mon Jul 18, 2022 12:25
Two tricks to minimize transfer size are format-independent, but only tga_encoder can do them currently AFAIK:

1. Using colormaps usually saves 2 bytes per pixel compared to 24bpp. It depends on how many colors you have.
2. Using 16bpp colors. It saves an additional 1 byte per color in a colormapped image, or 1 byte per pixel in a raw image.
We'll get back to this later. But the fact tga_encoder has these capabilities makes a better use case for a friendlier UI for tga_encoder as a standalone program.
erlehmann wrote:
Mon Jul 18, 2022 12:25
However, an interesting case can be made to use TGA for minimizing the size of the zip file download of a mod:
My example code generates a 16×16 image bitmap and saves it as an uncompressed and compressed TGA.
I took these bitmaps and converted them to PNG and then used “optipng -O7 -strip all” to optimize the PNG.

The file sizes in bytes are:

...

No surprises here – PNG is a better choice for this particular 16×16 RGB image.

Well, the command “du -h” confirms that each of those files is using the same storage, 4k.
However, when I zipped the files (as if they were published as a mod), I got these sizes:

...

The difference is a whopping 50 bytes … you can get even larger differences when adding more files per zip file.
So while reported byte size favors PNG, this seems like a good case for distributing small textures as TGA if your download is zipped – which is exactly what game developers have been doing for 20 years or so.

In xmaps I store the map image in item meta in TGA format, compressed using minetest.compress() (i.e. deflate).
(emphasis mine): A transfer vs storage argument? Interesting, but as you said, a "whopping 50 bytes"... it really feels like a moot point when you might have both forms - the ContentDB cache or a manually downloaded archive - and the decompressed copy on disk. They practically seem to cancel out.

I feel like we should be moving to something like zstd (.tar.zst) compression, by the way... but of course that's nowhere near as portable and not natively extractable on older operating systems without new software. Would it matter if it were just within ContentDB and the client, and ContentDB's web interface still served ZIP?
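A rough sketch of the comparison I have in mind (mymod is a hypothetical mod layout; assumes the zip and zstd tools are installed):

```shell
# build a tiny throwaway mod directory
mkdir -p mymod/textures
printf 'name = mymod\n' > mymod/mod.conf
head -c 4096 /dev/zero > mymod/textures/dummy.tga   # stand-in texture

# compress the same directory both ways and compare byte counts
zip -q -9 -r mod.zip mymod
tar -cf - mymod | zstd -q -19 -f -o mod.tar.zst
wc -c mod.zip mod.tar.zst
```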
erlehmann wrote:
Mon Jul 18, 2022 12:25
I personally suggest to favor tga_encoder for transfer size for every bitmap of size 8×8 and below, regardless of how good minetest.encode_png() becomes. But as long as you can measure the result in filesystem blocks, there is really not much of a difference – though you absolutely should compress and base64 encode TGA images when stored in item meta or similar places.
Item meta and other dynamic, low-resolution, low-bit-depth applications are an interesting point in favour of TGA. I'll see if I can work that in as a suggestion, though I'm not sure where to fit it just yet.
erlehmann wrote:
Mon Jul 18, 2022 12:25
Blockhead wrote:
Mon Jul 18, 2022 04:42
Most people author art assets solely in their image editor -snip -
There is no complicated conversion needed – a lot of painting programs like GIMP and mtpaint support TGA natively.
Unfortunately they don't support exporting with the features that tga_encoder has, like 16bpp R5G5B5A1. This is why, if you want to help people reduce their file sizes as much as possible with tga_encoder, which you keep advocating for, you need to make it as easy as possible to run at the command line instead of keeping it as a library. Obviously you are mostly advocating for dynamic media uses like xmaps, but I think it is just as applicable to media file authors working on their assets outside Minetest.
erlehmann wrote:
Mon Jul 18, 2022 12:34
Sorry, without any example I consider this advice baseless and useless.
(about paletted tilesheets): Well then, I'll just have to go and do it to prove a point :)
erlehmann wrote:
Mon Jul 18, 2022 12:34
For subsampling, select “4:2:0 (chroma quartered)”. This further reduces the color resolution and should affect filesize.
Ok thanks for clarifying, I'll add it to the OP.
erlehmann wrote:
Mon Jul 18, 2022 12:34
Just a note, this was not about tracker music, but low-bitrate WAV files that can not be well compressed using Vorbis.
Oh I see. That does make a decent case for supporting WAV or other formats, although that could inflict many more filesize nightmares upon us than I can see well-thought-out use cases for it.
/˳˳_˳˳]_[˳˳_˳˳]_[˳˳_˳˳\ Advtrains enthusiast | My map: Noah's Railyard | My Content on ContentDB ✝️♂

erlehmann
Member
Posts: 62
Joined: Mon Jan 31, 2022 00:41
GitHub: erlehmann
IRC: erlehmann

Re: Optimising mods for size

by erlehmann » Post

Blockhead wrote:
Mon Jul 18, 2022 13:29
erlehmann wrote:
Mon Jul 18, 2022 12:25
However, an interesting case can be made to use TGA for minimizing the size of the zip file download of a mod:
My example code generates a 16×16 image bitmap and saves it as an uncompressed and compressed TGA.
I took these bitmaps and converted them to PNG and then used “optipng -O7 -strip all” to optimize the PNG.

The file sizes in bytes are:

...

No surprises here – PNG is a better choice for this particular 16×16 RGB image.

Well, the command “du -h” confirms that each of those files is using the same storage, 4k.
However, when I zipped the files (as if they were published as a mod), I got these sizes:

...

The difference is a whopping 50 bytes … you can get even larger differences when adding more files per zip file.
So while reported byte size favors PNG, this seems like a good case for distributing small textures as TGA if your download is zipped – which is exactly what game developers have been doing for 20 years or so.

In xmaps I store the map image in item meta in TGA format, compressed using minetest.compress() (i.e. deflate).
(emphasis mine): A transfer vs storage argument? Interesting, but as you said, a "whopping 50 bytes"... it really feels like a moot point when you might have both forms - the ContentDB cache or a manually downloaded archive - and the decompressed copy on disk. They practically seem to cancel out.
This is not a transfer vs storage argument. The storage requirement after decompression is actually the same for all the different texture files, as all the files are small enough to occupy only a single block. The size of the zip file will be smaller though – and thus the transfer size when downloaded will be too.

The main issue here is that compressing stuff that is already compressed is a bad idea. I did not expect it to be that bad – but it totally makes sense to me that compressing a bunch of uncompressed bitmaps saves a lot more space than trying to compress already compressed bitmaps.
Blockhead wrote:
Mon Jul 18, 2022 13:29
I feel like we should be moving to something like zstd (.tar.zst) compression, by the way... but of course that's nowhere near as portable and not natively extractable on older operating systems without new software. Would it matter if it were just within ContentDB and the client, and ContentDB's web interface still served ZIP?
Please elaborate what exactly you expect from Zstd. I have already investigated it, but want to know why you think it is useful before I post what I found out.
Blockhead wrote:
Mon Jul 18, 2022 13:29
erlehmann wrote:
Mon Jul 18, 2022 12:25
Blockhead wrote:
Mon Jul 18, 2022 04:42
Most people author art assets solely in their image editor -snip -
There is no complicated conversion needed – a lot of painting programs like GIMP and mtpaint support TGA natively.
Unfortunately they don't support exporting with the features that tga_encoder has, like 16bpp R5G5B5A1. This is why, if you want to help people reduce their file sizes as much as possible with tga_encoder, which you keep advocating for, you need to make it as easy as possible to run at the command line instead of keeping it as a library. Obviously you are mostly advocating for dynamic media uses like xmaps, but I think it is just as applicable to media file authors working on their assets outside Minetest.
Indeed, I had not considered that. Granted, image editors often do not support all options for other formats either, which has probably led to the development of postprocessing tools like optipng.
Blockhead wrote:
Mon Jul 18, 2022 13:29
erlehmann wrote:
Mon Jul 18, 2022 12:34
Sorry, without any example I consider this advice baseless and useless.
(about paletted tilesheets): Well then I'll just have to go and do it to prove a point then :)
Yes, please!
Blockhead wrote:
Mon Jul 18, 2022 13:29
erlehmann wrote:
Mon Jul 18, 2022 12:34
Just a note, this was not about tracker music, but low-bitrate WAV files that can not be well compressed using Vorbis.
Oh I see. That does make a decent case for supporting WAV or other formats, although that could inflict many more filesize nightmares upon us than I can see well-thought-out use cases for it.
You are probably right. MIDI and/or some simple tracker format could be nice though, to save on filesize.
Last edited by erlehmann on Mon Jul 18, 2022 13:57, edited 2 times in total.
cdb_b9da8bbc6338

erlehmann
Member
Posts: 62
Joined: Mon Jan 31, 2022 00:41
GitHub: erlehmann
IRC: erlehmann

Re: Optimising mods for size

by erlehmann » Post

[deleted because I quoted instead of editing]
cdb_b9da8bbc6338

erlehmann
Member
Posts: 62
Joined: Mon Jan 31, 2022 00:41
GitHub: erlehmann
IRC: erlehmann

Re: Optimising mods for size

by erlehmann » Post

erlehmann wrote:
Mon Jul 18, 2022 12:58
Since you want cold hard truth, I made some estimations:

Any square bitmap will always fit in a filesystem block encoded as a TGA, if …

• … it is uncompressed, monochrome and is of size 63×63 or less.
• … it is colormapped with 16bpp colors (1 bit A, 5 bit for each of R/G/B) and is of size 59×59 or less.
• … it is colormapped with 24bpp colors (8 bit for each of R/G/B) and is of size 57×57 or less.
• … it is colormapped with 32bpp colors (8 bit for each of R/G/B/A) and is of size 55×55 or less.
• … it is uncompressed with 16bpp colors and is of size 45×45 or less.
• … it is uncompressed with 24bpp colors and is of size 36×36 or less.
• … it is uncompressed with 32bpp colors and is of size 31×31 or less.

At those sizes, I expect most images to compress much better using a PNG encoder, but I am not sure how bad minetest.encode_png() is and I do not care enough to find out – as I said before, for larger textures TGA is used despite its filesize and because of other reasons.
I have confirmed using the minetest_game default mod that storing textures below these limits as uncompressed TGA does not increase disk space requirements (since each of them requires a single block regardless), but doing so can reduce the size of the resulting zip file storing the mod. This only works with raw files, not with RLE-compressed files.

I suspect that the reason is that many highly compressed files lead to a situation where the zip file encoder is not able to exploit similarities between different files. Also, the PNG overhead is not the same for every file, as it can occur in different positions and some of it is checksums that are different every time.

Therefore, my advice to reduce mod archive size without increasing storage size would be to store textures below the stated limits as uncompressed TGA files unless the file exceeds 4 KiB; then use PNG or JPEG.

I strongly suggest you try this out yourself to verify my findings.

Edit: This would also mean that optipng only makes sense for larger textures or tilesheets.

Edit 2: sfan5 pointed out in #minetest that this effect can not rely on redundancies between files, as files in zips are compressed separately. You can easily verify that by putting two identical files in a zip file.
cdb_b9da8bbc6338

erlehmann
Member
Posts: 62
Joined: Mon Jan 31, 2022 00:41
GitHub: erlehmann
IRC: erlehmann

Re: Optimising mods for size

by erlehmann » Post

I have made a script to test my assertion that uncompressed bitmaps ultimately compress better in a zip.

I seriously wonder if I have missed anything important here.

Code:

minetest/textures$ ./compare-compression.sh soothing32/default
968K	soothing32/default
968K	soothing32/default_tga
84K	soothing32/default.zip
80K	soothing32/default_tga.zip
85779 soothing32/default.zip
79464 soothing32/default_tga.zip

minetest/textures$ ./compare-compression.sh ambiguity
2.5M	ambiguity
2.4M	ambiguity_tga
1.1M	ambiguity.zip
1012K	ambiguity_tga.zip
1066520 ambiguity.zip
1032542 ambiguity_tga.zip

minetest/textures$ cat ./compare-compression.sh
#!/bin/sh
set -eu
LANG=C

PNG_DIR=$1
test -e "$PNG_DIR"

TGA_DIR="$PNG_DIR"_tga
mkdir -p -- "$TGA_DIR"

cp "$PNG_DIR"/*.png "$TGA_DIR"
for FILE in "$TGA_DIR"/*.png; do
 PREFIX=${FILE%.png}
 convert "$PREFIX".png "$PREFIX".tga  # add “-depth 5” as 2nd arg for A1R5G5B5
 # add optional footer that minetest requires
 printf '\0\0\0\0\0\0\0\0TRUEVISION-XFILE.\0' >> "$PREFIX".tga
 PNG_BS=$(
  du --block-size 4k -- "$PREFIX".png \
  |cut -f1
 )
 TGA_BS=$(
  du --block-size 4k -- "$PREFIX".tga \
  |cut -f1
 )
 if [ "$TGA_BS" -gt "$PNG_BS" ]; then
  unlink -- "$PREFIX".tga
 else
  unlink -- "$PREFIX".png
 fi
done

PNG_ZIP="$PNG_DIR".zip
TGA_ZIP="$TGA_DIR".zip

unlink -- "$PNG_ZIP" || :
unlink -- "$TGA_ZIP" || :

>/dev/null zip -9 "$PNG_ZIP" "$PNG_DIR"/*
>/dev/null zip -9 "$TGA_ZIP" "$TGA_DIR"/*

du -ch -- "$PNG_DIR"/* |grep "total$" |sed "s%total%$PNG_DIR%"
du -ch -- "$TGA_DIR"/* |grep "total$" |sed "s%total%$TGA_DIR%"

du -h -- "$PNG_ZIP"
du -h -- "$TGA_ZIP"

wc -c -- "$PNG_ZIP"
wc -c -- "$TGA_ZIP"
cdb_b9da8bbc6338
