6 changes: 5 additions & 1 deletion build.sh
@@ -14,6 +14,7 @@ case "$OSTYPE" in
esac

# mingw64-specific linker options
+c_std=""
windows_linker=""
unameOut="$(uname -s)"
case "$unameOut" in
@@ -37,9 +38,12 @@ for arg in "$@"; do
exit 1
fi
;;
+--strict-c89)
+c_std="-std=c89 -pedantic -Werror=declaration-after-statement"
@robertkirkman commented Oct 7, 2025
To address the concern p2r3 raised, that the default for most people should be C89 so that all future code is verified to be C89-compatible: if that's desired, I'd suggest adding these arguments to the cosmocc invocation that's currently in this repository's GitHub Actions workflow. The official cosmocc target tested there could then fail CI and print the relevant errors on PRs that accidentally introduce C89-incompatible code.

But that's just my suggestion; I'm not sure what p2r3 and other contributors would think of it. The idea would also be most useful if GitHub Actions in this repo were eventually configured to test all PRs before merging rather than only after, which it currently isn't. Not sure whether that's something p2r3 would want enabled.

on:
push:
branches:
- main
workflow_dispatch:

@techflashYT (author) commented Oct 8, 2025

The problem with this is that, since C89 doesn't have inline, it'll technically produce less efficient code. I hack around this with a bunch of #ifdefs that check whether the platform has any way to inline functions, and if not, just stub out the keyword. But for real "production-grade" builds, one would probably want inline function support (for however much difference it makes, probably not that much). I'm not against test-building in strict C89 mode to ensure that nothing is catastrophically broken, but I'd probably advise against making it the default for final builds.

Ahh OK, I see. In that case the solution is probably to have two builds in CI: an unoptimized C89-mode build (to test for C89 compatibility) and an optimized C99+ build (to produce any release binaries).

The project's code is very small and fast to compile in GitHub Actions, so a second build wouldn't bloat the CI time very much.

+;;
esac
done

rm -f "bareiron$exe"
-$compiler src/*.c -O3 -Iinclude -o "bareiron$exe" $windows_linker
+$compiler src/*.c $c_std -O3 -Iinclude -o "bareiron$exe" $windows_linker
"./bareiron$exe"
30 changes: 15 additions & 15 deletions build_registries.js
@@ -411,25 +411,25 @@ async function convert () {
#include <stdint.h>
#include "registries.h"

-// Binary contents of required "Registry Data" packets
+/* Binary contents of required "Registry Data" packets */
uint8_t registries_bin[] = {
${toCArray(fullRegistryBuffer)}
};
-// Binary contents of "Update Tags" packets
+/* Binary contents of "Update Tags" packets */
uint8_t tags_bin[] = {
${toCArray(tagBuffer)}
};

-// Block palette
+/* Block palette */
uint16_t block_palette[] = { ${Object.values(itemsAndBlocks.palette).join(", ")} };
-// Block palette as VarInt buffer
+/* Block palette as VarInt buffer */
uint8_t network_block_palette[] = {
${toCArray(networkBlockPalette)}
};

-// Block-to-item mapping
+/* Block-to-item mapping */
uint16_t B_to_I[] = { ${itemsAndBlocks.mappingWithOverrides.join(", ")} };
-// Item-to-block mapping
+/* Item-to-block mapping */
uint8_t I_to_B (uint16_t item) {
switch (item) {
${itemsAndBlocks.mapping.map((c, i) => c ? `case ${c}: return ${i};\n ` : "").join("")}
@@ -445,25 +445,25 @@ uint8_t I_to_B (uint16_t item) {

#include <stdint.h>

-// Binary packet data (${fullRegistryBuffer.length + tagBuffer.length} bytes total)
+/* Binary packet data (${fullRegistryBuffer.length + tagBuffer.length} bytes total) */
extern uint8_t registries_bin[${fullRegistryBuffer.length}];
extern uint8_t tags_bin[${tagBuffer.length}];

-extern uint16_t block_palette[256]; // Block palette
-extern uint8_t network_block_palette[${networkBlockPalette.length}]; // Block palette as VarInt buffer
-extern uint16_t B_to_I[256]; // Block-to-item mapping
-uint8_t I_to_B (uint16_t item); // Item-to-block mapping
+extern uint16_t block_palette[256]; /* Block palette */
+extern uint8_t network_block_palette[${networkBlockPalette.length}]; /* Block palette as VarInt buffer */
+extern uint16_t B_to_I[256]; /* Block-to-item mapping */
+uint8_t I_to_B (uint16_t item); /* Item-to-block mapping */

-// Block identifiers
+/* Block identifiers */
${Object.keys(itemsAndBlocks.palette).map((c, i) => `#define B_${c} ${i}`).join("\n")}

-// Item identifiers
+/* Item identifiers */
${Object.entries(itemsAndBlocks.items).map(c => `#define I_${c[0]} ${c[1]}`).join("\n")}

-// Biome identifiers
+/* Biome identifiers */
${biomes.map((c, i) => `#define W_${c} ${i}`).join("\n")}

-// Damage type identifiers
+/* Damage type identifiers */
${registries["damage_type"].map((c, i) => `#define D_${c} ${i}`).join("\n")}

#endif