For reasons, I wanted a sky texture with approximately correct stars, rather than the random star textures that are commonly used.

Fortunately the data that we need for this is readily available. Not knowing any better, I chose the Yale Bright Star Catalog, which lists all stars of magnitude 6.5 or brighter. This contains 9,110 objects.

I found this at the WCSTools Catalogs page.

The stars then need to be rendered into an image in a format suitable for use as a texture. I chose to write this in Zig because it is used in a Zig project, and it can then be included directly in the larger project's build process.

Reading the star catalog

The first thing we need to do is read the star catalog. Fortunately the format is fairly well documented.


The header is defined as…

The first 28 bytes of both BSC5 and BSC5ra contain the following information:

type name value description
Integer*4 STAR0 0 Subtract from star number to get sequence number
Integer*4 STAR1 1 First star number in file
Integer*4 STARN 9110 Number of stars in file
Integer*4 STNUM 1 0 if no star i.d. numbers are present; 1 if star i.d. numbers are in catalog file; 2 if star i.d. numbers are in file
Logical*4 MPROP 1 1 if proper motion is included; 0 if no proper motion is included
Integer*4 NMAG -1 Number of magnitudes present (-1=J2000 instead of B1950)
Integer*4 NBENT 32 Number of bytes per star entry
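As a quick cross-check of this layout (a sketch in Python, separate from the Zig build; the field names are taken from the table above), the header can be unpacked with the standard struct module:

```python
import struct

# Seven little-endian 32-bit integers, 28 bytes in total.
HEADER_FMT = "<7i"
HEADER_SIZE = struct.calcsize(HEADER_FMT)

def read_header(raw: bytes) -> dict:
    fields = ("star0", "star1", "starn", "stnum", "mprop", "nmag", "nbent")
    return dict(zip(fields, struct.unpack(HEADER_FMT, raw[:HEADER_SIZE])))

# A synthetic header built from the documented BSC5 values.
sample = struct.pack(HEADER_FMT, 0, 1, 9110, 1, 1, -1, 32)
header = read_header(sample)
```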
/// The first 28 bytes of both BSC5 and BSC5ra contain the following information:
const CatalogHeader = packed struct {
    /// Integer*4 STAR0=0     Subtract from star number to get sequence number
    star0: i32,
    /// Integer*4 STAR1=1     First star number in file
    star1: i32,
    /// Integer*4 STARN=9110  Number of stars in file
    starn: i32,
    /// Integer*4 STNUM=1     0 if no star i.d. numbers are present
    ///                       1 if star i.d. numbers are in catalog file
    ///                       2 if star i.d. numbers are in file
    stnum: i32,
    /// Logical*4 MPROP=1     1 if proper motion is included
    ///                       0 if no proper motion is included
    mprop: i32,
    /// Integer*4 NMAG=-1     Number of magnitudes present (-1=J2000 instead of B1950)
    nmag: i32,
    /// Integer*4 NBENT=32    Number of bytes per star entry
    nbent: i32,
};


The entry is defined as…

Each catalog entry in BSC5 and BSC5ra contains 32 bytes with the following information:

type name description
Real*4 XNO Catalog number of star
Real*8 SRA0 B1950 Right Ascension (radians)
Real*8 SDEC0 B1950 Declination (radians)
Character*2 IS Spectral type (2 characters)
Integer*2 MAG V Magnitude * 100
Real*4 XRPM R.A. proper motion (radians per year)
Real*4 XDPM Dec. proper motion (radians per year)
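The same kind of cross-check works for the entry layout (again a Python sketch, not part of the program; the sample values are invented): the format string below should come out at exactly 32 bytes.

```python
import struct

# f32, two f64s, two characters, i16, and two f32s; little-endian, no padding.
ENTRY_FMT = "<fdd2shff"
ENTRY_SIZE = struct.calcsize(ENTRY_FMT)

def read_entry(raw: bytes) -> dict:
    xno, sra0, sdec0, spectral, mag, xrpm, xdpm = struct.unpack(ENTRY_FMT, raw)
    return {
        "xno": xno, "sra0": sra0, "sdec0": sdec0,
        "spectral": spectral.decode("ascii"),
        "mag": mag / 100.0,  # V magnitude is stored multiplied by 100
        "xrpm": xrpm, "xdpm": xdpm,
    }

# An invented A1-type star of magnitude 6.70.
sample = struct.pack(ENTRY_FMT, 1.0, 0.0225, 0.789, b"A1", 670, 0.0, 0.0)
entry = read_entry(sample)
```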
/// Each catalog entry in BSC5 and BSC5ra contains 32 bytes with the following
/// information:
const CatalogEntry = packed struct {
    /// Catalog number of star
    xno: f32,
    /// B1950 Right Ascension (radians)
    sra0: f64,
    /// B1950 Declination (radians)
    sdec0: f64,
    /// Spectral type (2 characters)
    is: u16,
    /// V Magnitude * 100
    mag: i16,
    /// R.A. proper motion (radians per year)
    xrpm: f32,
    /// Dec. proper motion (radians per year)
    xdpm: f32,

The apparent magnitude is stored as a signed integer scaled by 100, so this is a small utility to convert it to an f32.

    /// Get the apparent magnitude as a floating point value.
    pub fn getMagnitude(self: CatalogEntry) f32 {
        const fmag: f32 = @floatFromInt(self.mag);
        return fmag / 100;
    }

The standard formatter for the catalog entry formats a record so that, for example, the first entry appears as…

0001 +01:17:28.50 +45:13:45.00 6.70 A1 -00:00:0.01 -00:00:0.02

    pub fn format(self: CatalogEntry, fmt: []const u8, options: std.fmt.FormatOptions, writer: anytype) !void {
        _ = fmt;
        _ = options;

        // The two characters of the spectral type.
        const star_type = [_]u8{
            @truncate( >> 8),
            @truncate( & 0xff),
        };

        try writer.print("{d:0>4} ", .{
            @as(u16, @intFromFloat(self.xno)),
        });
        try formatDegMinSec(writer, f64, self.sra0);
        try writer.print(" ", .{});
        try formatDegMinSec(writer, f64, self.sdec0);
        try writer.print(" {d:0.2} {s} ", .{
            self.getMagnitude(),
            star_type,
        });
        try formatDegMinSec(writer, f32, self.xrpm);
        try writer.print(" ", .{});
        try formatDegMinSec(writer, f32, self.xdpm);
    }
};

formatDegMinSec is a procedure to pretty-print an angle in radians as degrees, minutes, and seconds. It is used by the catalog entry formatter above, which allows us to compare the output of catalog entries to the samples given at the source. It takes the comptime type of the input value because the catalog entry uses two different float types, f64 and f32.

fn formatDegMinSec(writer: anytype, comptime T: type, value: T) !void {
    const value_deg = std.math.radiansToDegrees(T, value);
    const deg = @trunc(value_deg);
    const rem = 60 * (if (value_deg >= 0) value_deg - deg else deg - value_deg);
    const sign = if (value_deg >= 0) "+" else "-";
    const min = @trunc(rem);
    const sec = 60 * (rem - min);

    try writer.print("{s}{d:0>2}:{d:0>2}:{d:0>2.2}", .{
        sign,
        @abs(@as(i8, @intFromFloat(deg))),
        @abs(@as(i8, @intFromFloat(min))),
        sec,
    });
}
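The degrees/minutes/seconds arithmetic can be sanity-checked outside Zig; this is a small Python sketch of the same logic (the function name is mine):

```python
import math

def deg_min_sec(angle_radians: float) -> str:
    """Format an angle in radians as +/-DD:MM:SS.ss."""
    degrees = math.degrees(angle_radians)
    sign = "+" if degrees >= 0 else "-"
    degrees = abs(degrees)
    d = int(degrees)
    rem = 60 * (degrees - d)
    m = int(rem)
    s = 60 * (rem - m)
    return f"{sign}{d:02d}:{m:02d}:{s:05.2f}"

# 1 degree 17 minutes 28.5 seconds is 1.29125 degrees;
# converting to radians and back should reproduce it.
formatted = deg_min_sec(math.radians(1.29125))
```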

Reading/processing the catalog

This simply reads the header, reads STARN entries, then prints the first and last entries from the catalog.

The one gotcha is that the call to readStruct(CatalogHeader) reads 32 bytes from the reader, while the defined length of CatalogHeader is only 28 bytes, so it is necessary to seek back to the correct offset before reading the catalog entries. Fortunately the catalog entry size is exactly 32 bytes, so no seeking is needed for those reads.

Once the file pointer is in the correct place we switch to using a bufferedReader which is a massive performance upgrade for sequential reads like this.

This returns a slice of CatalogEntry.

fn processCatalog(file: std.fs.File, allocator: Allocator) ![]CatalogEntry {
    const header = try file.reader().readStruct(CatalogHeader);
    std.debug.assert(header.nbent == @sizeOf(CatalogEntry));

    try file.seekTo(28);
    var bufferedReader =;
    const reader = bufferedReader.reader();

    const n = @abs(header.starn);
    var stars = try allocator.alloc(CatalogEntry, @intCast(n));

    for (0..n) |i| {
        stars[i] = try reader.readStruct(CatalogEntry);
    }

    const out =;
    try out.print("First and last stars in the catalog:\n", .{});
    try out.print("{}\n", .{stars[0]});
    try out.print("{}\n", .{stars[n - 1]});

    return stars;
}

Generating the Texture Image

Textures for spheres typically use either a cubemap or an equirectangular 2D texture. For many applications a cubemap is better because it has less distortion; however, computing a cubemap here is a little more complicated, so I went with an equirectangular texture.


The star catalog gives each star a coordinate in Right Ascension (RA or α) and Declination (dec or δ), which are equivalent to longitude and latitude respectively.

To make the texture we need to map these into (x, y) image coordinates. Both are simple linear mappings, using \(w, h\) for the width and height of the texture respectively:

\begin{equation} x \leftarrow w - \frac{\alpha \cdot w}{2 \pi} , y \leftarrow \frac{\left( \pi/2 - \delta \right) \cdot h}{\pi} \end{equation}

To make conversion simpler in code, \(y\) is re-arranged as

\begin{equation} y \leftarrow \frac{h}{2} - \delta \cdot \frac{h}{\pi} \end{equation}

with the inverse

\begin{equation} \delta \leftarrow \left(\frac{h}{2} - y \right) \frac{\pi}{h} \end{equation}
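A quick numeric check of these mappings (a Python sketch; the 4096×2048 size is just an example) confirms that the equator lands on the middle row and that the δ mapping round-trips through its inverse:

```python
import math

W, H = 4096, 2048  # example texture dimensions (width is 2 * height)

def x_from_ra(ra: float) -> float:
    # x <- w - alpha * w / (2 pi)
    return W - ra * W / (2 * math.pi)

def y_from_dec(dec: float) -> float:
    # y <- h/2 - delta * h / pi  (the rearranged form)
    return H / 2 - dec * H / math.pi

def dec_from_y(y: float) -> float:
    # delta <- (h/2 - y) * pi / h  (the inverse)
    return (H / 2 - y) * math.pi / H
```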

To ensure that these coordinate transformations are consistent, and to keep the code a little cleaner, a helper struct performs these transformations.

The init function simply sets up the constants used for all conversions.

const ImageCoordHelper = struct {
    width: usize,
    height: usize,
    fwidth: f64,
    fheight: f64,
    wconv: f64,
    hconv: f64,
    h2: f64,

    const Self = @This();

    pub fn init(width: usize, height: usize) Self {
        const fwidth: f64 = @floatFromInt(width);
        const fheight: f64 = @floatFromInt(height);
        return .{
            .width = width,
            .height = height,
            .fwidth = fwidth,
            .fheight = fheight,
            .wconv = fwidth / (2 * std.math.pi),
            .hconv = fheight / std.math.pi,
            .h2 = fheight * 0.5,
        };
    }

These methods then create

  • x coordinate values from Right Ascension
    pub fn xFromRa(self: Self, ra: f64) f64 {
        return self.fwidth - ra * self.wconv;
    }
  • y coordinate values (referred to as a row when returned as a usize) from Declination
    pub fn yFromDec(self: Self, dec: f64) f64 {
        return self.h2 - dec * self.hconv;
    }

    pub fn rowFromDec(self: Self, dec: f64) usize {
        return @intFromFloat(@round(self.h2 - dec * self.hconv));
    }
  • and the inverse, a declination from a row index.
    pub fn decFromRow(self: Self, row: usize) f64 {
        const y: f64 = @floatFromInt(row);
        return (self.h2 - y) / self.hconv;
    }
};

Magnitude ↔ Brightness

The only other element of the catalog entry that is used for each star is the apparent magnitude. As encoded in the catalog, this uses the standard astronomical scale, which is reverse logarithmic: brighter stars have lower magnitudes.

I'm using a slightly arbitrary formula to get a linear brightness value \(b\) from the magnitude \(m\)

$$ b \leftarrow \frac{4096}{10^{0.4m}} $$

The catalog has stars with magnitudes between -1.46 (Sirius) and 6.5 — the faintest stars visible with the naked eye. This table shows how the apparent magnitude converts to the linear brightness scale used in the code, and for reference, how many stars are brighter

Apparent magnitude   Brightness relative to Vega   Number of brighter stars   b
-1.46                385%                          1                          15717
-1.0                 251%                          1                          10289
0.0                  100%                          4                          4096
1.0                  40%                           15                         1631
2.0                  16%                           48                         649
3.0                  6.3%                          171                        258
4.0                  2.5%                          513                        103
5.0                  1.0%                          1,602                      41
6.0                  0.4%                          4,800                      16
6.5                  0.25%                         9,100                      10
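The b column can be reproduced directly from the formula; this Python sketch mirrors the expression used in the code:

```python
def brightness(magnitude: float) -> float:
    # b <- 4096 / 10^(0.4 m): magnitude 0 maps to 4096, and every
    # 5 magnitudes fainter divides the brightness by 100.
    return 4096 / 10 ** (0.4 * magnitude)
```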

Processing the stars

Each star is processed in turn: the Right Ascension and Declination are converted to image coordinates, and the x coordinate is then scaled by the cosine of the declination. This compensates for the stretching of the texture as it is applied to a sphere; as we will see below, each row is later stretched back out to the entire width of the texture.

fn processStars(stars: []const CatalogEntry, width: usize, height: usize, buffer: []u8, allocator: Allocator) !void {
    const ich = ImageCoordHelper.init(width, height);

    for (stars) |star| {
        const cos_star_decl = @cos(star.sdec0);

        const x = ich.xFromRa(star.sra0);
        const y = ich.rowFromDec(star.sdec0);

        // The column for this star scaled by cos \delta
        const sx: usize = @intFromFloat(@round(x * cos_star_decl));

        const offset = y * width + sx;
        if (offset >= buffer.len) {
            continue;
        }

The magnitude is then converted to the linear scale as described above.

        const magnitude = star.getMagnitude();
        const brightness = 4096 / std.math.pow(f32, 10, 0.4 * magnitude);

The range of possible brightness values (10 – 15,717) is problematic. I could use a 16-bit-per-channel image format, but that isn't universally supported for textures; I could encode the brightness on a logarithmic scale, but working in the linear domain makes adding brightnesses, e.g. where there are binary stars in the same pixel, much easier. The solution I used is to render stars at different sizes based on their brightness values.

  • For stars with a brightness \(b \ge 1,020\), I use a 5×5 gaussian kernel
const kernel5x5 = [5][5]f32{
    [_]f32{ 1, 4, 7, 4, 1 },
    [_]f32{ 4, 16, 26, 16, 4 },
    [_]f32{ 7, 26, 41, 26, 7 },
    [_]f32{ 4, 16, 26, 16, 4 },
    [_]f32{ 1, 4, 7, 4, 1 },
};

const kernel5x5Weight: f32 = 273;

  • For stars with a brightness \(255 \le b \lt 1,020\), I use a 3×3 gaussian kernel
const kernel3x3 = [3][3]f32{
    [_]f32{ 1, 2, 1 },
    [_]f32{ 2, 4, 2 },
    [_]f32{ 1, 2, 1 },
};

const kernel3x3Weight: f32 = 16;

  • For stars with a brightness \(b \lt 255\), I use a single pixel.

This distributes the brightness over a larger area, but the total brightness of each star is maintained.
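Brightness is conserved because each star's brightness is divided by the kernel weight before being multiplied back by the kernel entries, so the weight constants must equal the kernel sums; a quick Python check:

```python
kernel5x5 = [
    [1, 4, 7, 4, 1],
    [4, 16, 26, 16, 4],
    [7, 26, 41, 26, 7],
    [4, 16, 26, 16, 4],
    [1, 4, 7, 4, 1],
]
kernel3x3 = [
    [1, 2, 1],
    [2, 4, 2],
    [1, 2, 1],
]

# These totals should match kernel5x5Weight (273) and kernel3x3Weight (16).
sum5x5 = sum(map(sum, kernel5x5))
sum3x3 = sum(map(sum, kernel3x3))
```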

Applying the 5×5 kernel follows a typical structure for this kind of thing, with two nested for loops, for rows and columns.

We do need to do a couple of things differently to compensate for the stretching by \(\cos \delta\).

        if (@round(brightness) >= 1020) {
            const unit = @min(32, brightness / kernel5x5Weight);
            for (0..5) |row| {
                const target_row = y + row - 2;
                if (target_row < 0 or target_row >= height)
                    continue;

We get the declination for this specific row, and then compute the central x position — row_sx — scaled appropriately, and the total length of this row — row_ssx.

                const decl = ich.decFromRow(target_row);
                const cos_decl = @cos(decl);
                const row_sx: usize = @intFromFloat(@round(x * cos_decl));
                const row_ssx: usize = @intFromFloat(@round(ich.fwidth * cos_decl));

Then, in the inner column loop, row_sx is used to offset the position correctly, and row_ssx is used to handle the literal edge cases where a star has a Right Ascension close to 0 or 2π.

                const row_offset = target_row * width;
                for (0..5) |col| {
                    const col_offset = (row_sx + col - 2 + row_ssx) % row_ssx;
                    const value = @round(unit * kernel5x5[row][col]);
                    setAt(@intFromFloat(value), row_offset + col_offset, buffer);
                }
            }

Exactly the same process is used for the 3×3 kernel.

        } else if (@round(brightness) >= 255) {
            const unit = brightness / kernel3x3Weight;
            for (0..3) |row| {
                const target_row = y + row - 1;
                if (target_row < 0 or target_row >= height)
                    continue;

                const decl = ich.decFromRow(target_row);
                const cos_decl = @cos(decl);
                const row_sx: usize = @intFromFloat(@round(x * cos_decl));
                const row_ssx: usize = @intFromFloat(@round(ich.fwidth * cos_decl));

                const row_offset = target_row * width;
                for (0..3) |col| {
                    const col_offset = (row_sx + col - 1 + row_ssx) % row_ssx;
                    const value = @round(unit * kernel3x3[row][col]);
                    setAt(@intFromFloat(value), row_offset + col_offset, buffer);
                }
            }
        } else {

And single pixel stars are simply set into the buffer.

            setAt(@intFromFloat(@round(brightness)), offset, buffer);
        }
    }

We then stretch the image: each row is stretched from \(\cos \delta \cdot \text{width}\) to the full width of the image.

A gamma "correction" is then applied to bring the texture approximately into sRGB

    try stretchBuffer(ich, buffer, allocator);

    var i: usize = 0;
    const gamma: f32 = 1.0 / 2.2;
    while (i < buffer.len) : (i += 1) {
        const linearValue = @as(f32, @floatFromInt(buffer[i])) / 255;
        const gammaValue = std.math.pow(f32, linearValue, gamma);
        // Assign directly rather than via setAt, which would add to the existing value.
        buffer[i] = @intFromFloat(@round(255 * gammaValue));
    }
}
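The per-pixel gamma transform is easy to check in isolation; this Python sketch (function name mine) mirrors the loop: normalise to [0, 1], apply the power curve, and rescale:

```python
def gamma_correct(byte: int, gamma: float = 1.0 / 2.2) -> int:
    # Normalise, apply the power curve, and rescale to 0-255.
    linear = byte / 255
    return round(255 * linear ** gamma)
```

Note that the curve only ever brightens: every input maps to a value at least as large as itself, with 0 and 255 fixed.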

Stretch Buffer

When the buffer was initially created, the stars were drawn such that the x-coordinates were scaled by \(\cos \delta\). This method then stretches each row to fill the entire width of the image. Then when texture mapping is done, the stars will look (mostly) undistorted regardless of their coordinates.
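The core of the stretch is plain linear-interpolation resampling; this Python sketch (names mine) stretches the first ssx pixels of a row across its full width, much as stretchBuffer does:

```python
def stretch_row(row: list, ssx: int) -> list:
    """Stretch the first ssx pixels of `row` to the row's full width."""
    width = len(row)
    scale = (ssx - 1) / (width - 1)
    out = [0] * width
    for ix in range(width):
        src = ix * scale          # corresponding position in the scaled row
        low = int(src)
        frac = src - low
        high = (low + 1) % width  # wrap, as in the Zig code
        out[ix] = round(row[low] * (1 - frac) + row[high] * frac)
    return out

# A short ramp occupying 4 of 8 pixels, stretched to the full row.
stretched = stretch_row([0, 30, 60, 90, 0, 0, 0, 0], ssx=4)
```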

fn stretchBuffer(ich: ImageCoordHelper, buffer: []u8, allocator: Allocator) !void {
    var row = try allocator.alloc(u8, ich.width);

For each row we calculate

  • dest_offset — The offset in the buffer to the start of the current row
  • decl — The declination for this row
  • ssx — The scaled width for this row, the row will be scaled from this width to the image width
  • scale — The scale factor implied by ssx and ich.fwidth
    for (0..ich.height) |y| {
        const dest_offset = y * ich.width;
        const decl = ich.decFromRow(y);
        const ssx = @round(ich.fwidth * @cos(decl));
        if (ssx < 1) {
            continue;
        }
        const scale = (ssx - 1) / (ich.fwidth - 1);

        @memset(row, 0);

For each pixel in the row, we then compute its value by finding the corresponding floating point coordinate in the scaled row and use linear interpolation to calculate the value for the stretched pixel.

  • dest_x — the target x coordinate as an f64
  • src_x — the scaled source x coordinate
  • trunc_src_x — the integer portion of src_x
  • fract — the fractional part of src_x
  • low — trunc_src_x as an integer
  • high — the next x coordinate after low, wrapping around to 0 when necessary
        // stretch from ssx to width
        for (0..ich.width) |ix| {
            const dest_x: f64 = @floatFromInt(ix);
            const src_x = dest_x * scale;
            const trunc_src_x = @trunc(src_x);
            const fract = src_x - trunc_src_x;
            const low: usize = @intFromFloat(trunc_src_x);
            const high = (low + 1) % ich.width;

We then linearly interpolate the pixel values at low and high, using fract as the interpolation \(t\) value.

            const value = std.math.lerp(
                @as(f64, @floatFromInt(buffer[dest_offset + low])),
                @as(f64, @floatFromInt(buffer[dest_offset + high])),
                fract,
            );
            setAt(@intFromFloat(@round(value)), @intFromFloat(dest_x), row);
        }

The stretched buffer is then copied back over the current row

        std.mem.copy(u8, buffer[dest_offset..], row);
    }
}

Setting a Pixel

Both processStars and stretchBuffer use this helper method to set a pixel value in a buffer.

This adds the supplied value to the current value in the buffer at offset, and prevents overflow.

fn setAt(value: usize, offset: usize, buffer: []u8) void {
    const curr = @as(usize, @intCast(buffer[offset]));
    buffer[offset] = @as(u8, @intCast(@min(255, curr + value)));
}
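The saturating accumulate matters when two stars land in the same pixel (the binary-star case mentioned earlier); a minimal Python equivalent:

```python
def set_at(value: int, offset: int, buffer: bytearray) -> None:
    # Add to the existing value, clamping at the 8-bit maximum.
    buffer[offset] = min(255, buffer[offset] + value)

buf = bytearray(4)
set_at(200, 1, buf)
set_at(100, 1, buf)  # would be 300; clamps to 255
```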


For the sake of simplicity the primary output of this program is a PGM (Portable GrayMap) which has the advantage of being embarrassingly easy to write, and is well supported by common unix graphics utilities.

fn writeOutput(file: std.fs.File, width: usize, height: usize, buffer: []u8) !void {
    var bufferedWriter =;
    const writer = bufferedWriter.writer();

    try writer.print("P5\n{d} {d}\n255\n", .{ width, height });
    _ = try writer.write(buffer);
    try bufferedWriter.flush();
}

Converting Output

If the convert utility, part of the ImageMagick software suite, is available, then it can be called to convert the PGM file to a more appropriate file format.

fn convertToImg(allocator: Allocator, output_path: []const u8, img_path: []const u8) !void {
    const out =;
    try out.print("Converting {s} to {s}\n", .{ output_path, img_path });

    const convert = "convert";
    const convert_args = [_][]const u8{ convert, output_path, img_path };
    var result = try std.ChildProcess.exec(.{
        .allocator = allocator,
        .argv = &convert_args,
    });
    // Treat a non-zero exit status as failure.
    if (result.term.Exited != 0) {
        return error.ConvertFailed;
    }
}

main() - Putting it all together

pub fn main() !void {
    const out =;
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();
    const allocator = arena.allocator();

Process command line arguments. This is fairly simplistic processing: there is one option, --height; two required positional arguments, CATALOG and PGM, for the catalog file and the output PGM file respectively; and an optional positional argument, IMG, giving a path to an image file to which the PGM should be converted.

    var args = try std.process.argsWithAllocator(allocator);
    defer args.deinit();

    var catalog: ?[]const u8 = null;
    var output: ?[]const u8 = null;
    var img: ?[]const u8 = null;
    var height: usize = 2048;
    var get_height = false;

    _ =;
    while ( |arg| {
        if (std.mem.eql(u8, "--help", arg)) {
            usage(0);
        } else if (std.mem.eql(u8, "--version", arg)) {
            version();
        } else if (get_height) {
            height = std.fmt.parseInt(usize, arg, 0) catch {
                std.debug.print("Error: '{s}' isn't a valid height\n", .{arg});
                usage(1);
            };
            get_height = false;
        } else if (std.mem.eql(u8, "--height", arg)) {
            get_height = true;
        } else if (catalog == null) {
            catalog = arg;
        } else if (output == null) {
            output = arg;
        } else if (img == null) {
            img = arg;
        } else {
            usage(1);
        }
    }

    if (catalog == null or output == null or get_height) {
        usage(1);
    }

Attempt to create a path to the catalog file; the catalog.? is safe here because we just checked that catalog is not null.

    const catalog_path = fs.cwd().realpathAlloc(allocator, catalog.?) catch |err| {
        std.debug.print("Error: {any}\n  '{s}' is an invalid path\n", .{ err, catalog.? });
        std.process.exit(1);
    };

Resolve the output path, this doesn't use realpathAlloc because the file may not exist at this point.

    const pwd = fs.cwd().realpathAlloc(allocator, ".") catch unreachable;
    const output_path = fs.path.resolve(allocator, &[_][]const u8{ pwd, output.? }) catch |err| {
        std.debug.print("Error: {any}\n  '{s}' is an invalid path\n", .{ err, output.? });
        std.process.exit(1);
    };

In a similar fashion, resolve the image path if one was provided.

    var img_path: ?[]u8 = null;
    if (img) |img_arg| {
        img_path = fs.path.resolve(allocator, &[_][]const u8{ pwd, img_arg }) catch |err| {
            std.debug.print("Error: {any}\n  '{s}' is an invalid path\n", .{ err, img_arg });
            std.process.exit(1);
        };
    }

Open the catalog file, create the output file, and allocate the working buffer for the image based on the height. The width is always 2×height because the range of right ascension is 2π and the range of declination is π.

    try out.print("Stars, using catalog '{s}'\n", .{catalog_path});

    const file = std.fs.openFileAbsolute(catalog_path, .{}) catch |err| {
        std.debug.print("Error: {any}\n  Cannot open '{s}'\n", .{ err, catalog_path });
        std.process.exit(1);
    };
    defer file.close();

    const output_file = std.fs.createFileAbsolute(output_path, .{}) catch |err| {
        std.debug.print("Error: {any}\n  Cannot create '{s}'\n", .{ err, output_path });
        std.process.exit(1);
    };
    defer output_file.close();

    const width = 2 * height;

    var grayBuffer = allocator.alloc(u8, width * height) catch {
        std.debug.print("Unable to allocate memory for output\n", .{});
        std.process.exit(1);
    };
    @memset(grayBuffer, 0);

  • Process the catalog
    const stars = processCatalog(file, allocator) catch |err| {
        std.debug.print("Error: {any}\n  Processing '{s}'\n", .{ err, catalog_path });
        std.process.exit(1);
    };
  • Process stars to create image
    processStars(stars, width, height, grayBuffer, allocator) catch |err| {
        std.debug.print("Error: {any}\n  Processing stars\n", .{err});
        std.process.exit(1);
    };

  • Write the PGM file
    writeOutput(output_file, width, height, grayBuffer) catch |err| {
        std.debug.print("Error: {any}\n  writing '{s}'\n", .{ err, output_path });
        std.process.exit(1);
    };

  • If an image path was provided, convert the PGM file to another format.
    if (img_path) |converted_path| {
        convertToImg(allocator, output_path, converted_path) catch |err| {
            std.debug.print("Error: {any}\n  converting '{s}' to '{s}'\n", .{
                err, output_path, converted_path,
            });
        };
    }
}


Prints the usage message and exits with the supplied exit code. This is called by argument processing code.

fn usage(exitCode: u8) noreturn {
    const out =;
    const usage_msg =
        \\Usage: stars [options] <CATALOG> <PGM OUTPUT> [<IMG OUTPUT>]
        \\  CATALOG     Path to the Yale Bright Star Catalog
        \\  PGM         Path to a pgm file to be generated
        \\  IMG         Optional path to a file into which the PGM file will be
        \\              converted. The file type will be determined by the
        \\              extension.
        \\  --height    The height of the output image, note the width is
        \\              always twice the height. (default is 2048)
        \\  --help      Show this usage information and exit.
        \\  --version   Print a version number and exit.
        \\
    ;
    out.print(usage_msg, .{}) catch unreachable;
    std.process.exit(exitCode);
}


Prints the current version and exits. This is called if the --version argument is passed.

fn version() noreturn {
    const out =;
    out.print("Stars 0.1.0\n", .{}) catch unreachable;
    std.process.exit(0);
}


All put together this gives us stars.zig

const std = @import("std");
const fs = std.fs;
const Allocator = std.mem.Allocator;



Run with default options, this generates this star texture. Feel free to use it as you see fit; it is entirely derived from publicly available data.

To see it in action, check the demo page.