File System

The File System utilities offer a way to organize and query file-system data in renoun. They allow you to define a schema for file exports and query those exports using a simple API.

To get started with the File System API, instantiate the Directory class to target a set of files and directories from the file system. You can then use the getEntry / getDirectory / getFile methods to query a specific descendant file or directory:

import { Directory } from 'renoun'

const posts = new Directory({
  path: 'posts',
  loader: {
    mdx: (path) => import(`./posts/${path}.mdx`),
  },
})

Here we are creating a new Directory instance that targets the posts directory relative to the working directory. We are also specifying a loader for the mdx file extension that will be used to load the file contents using the bundler.

Referencing path schemes

The Directory and File constructors accept path schemes that resolve using internal resolvers before the adapters interact with the file system. The only supported scheme currently is workspace:, which resolves paths relative to the workspace root instead of the current working directory.

This is helpful when using renoun from a nested package (for example apps/site) but you need to reference files located in another workspace directory (like examples):

import { Directory } from 'renoun'

const examples = new Directory({
  path: 'workspace:examples',
})

If the current working directory is apps/site, the example above resolves to ../../examples internally, and to ./examples when run from the workspace root.
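A rough sketch of how this resolution works, assuming a resolver along these lines (resolveWorkspacePath is illustrative, not renoun's actual implementation):

```typescript
import * as path from 'node:path'

// Hypothetical sketch of resolving the `workspace:` scheme: strip the
// scheme, join against the workspace root, then express the result
// relative to the current working directory.
function resolveWorkspacePath(
  schemePath: string,
  workspaceRoot: string,
  cwd: string
): string {
  const target = schemePath.replace(/^workspace:/, '')
  const absolute = path.posix.join(workspaceRoot, target)
  const relative = path.posix.relative(cwd, absolute)
  if (relative === '') return '.'
  return relative.startsWith('.') ? relative : `./${relative}`
}

// From a nested package:
resolveWorkspacePath('workspace:examples', '/repo', '/repo/apps/site')
// → '../../examples'

// From the workspace root:
resolveWorkspacePath('workspace:examples', '/repo', '/repo')
// → './examples'
```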

Querying file system entries

import { Directory } from 'renoun'

const posts = new Directory({
  path: 'posts',
  loader: {
    mdx: (path) => import(`./posts/${path}.mdx`),
  },
})

export default async function Page({ slug }: { slug: string }) {
  const post = await posts.getFile(slug, 'mdx')
  const Content = await post.getExportValue('default')

  return <Content />
}

You can query the entries within the directory to help with generating navigation and index pages. For example, we can filter to only the mdx file extension and generate an index page of links to all posts using the getEntries method:

import { Directory, withSchema } from 'renoun'

interface PostType {
  frontmatter: {
    title: string
    date: Date
  }
}

const posts = new Directory({
  path: 'posts',
  filter: '*.mdx',
  loader: {
    mdx: withSchema<PostType>((path) => import(`./posts/${path}.mdx`)),
  },
})

export default async function Page() {
  const allPosts = await posts.getEntries()

  return (
    <>
      <h1>Blog</h1>
      <ul>
        {allPosts.map(async (post) => {
          const pathname = post.getPathname()
          const frontmatter = await post.getExportValue('frontmatter')

          return (
            <li key={pathname}>
              <a href={pathname}>{frontmatter.title}</a>
            </li>
          )
        })}
      </ul>
    </>
  )
}

File selection criteria

When querying files using getFile or getEntry, the file system follows a specific priority order to resolve ambiguous paths:

  1. Sibling files over directories: When both a file and a directory exist with the same base name (e.g., integrations.mdx and integrations/), the sibling file is preferred over the directory. This ensures that getEntry('integrations') returns integrations.mdx rather than the integrations/ directory.
  2. Base files over files with modifiers: When multiple files share the same base name but have different modifiers (e.g., Reference.tsx and Reference.examples.tsx), the base file without a modifier is preferred.
  3. Extension matching: When an extension is specified in the query (e.g., getFile('button', 'tsx')), only files matching that extension are considered.
  4. Directory representatives: If a directory is selected, the file system looks for a representative file within that directory in this order:
    • A file with the same name as the directory (e.g., Button/Button.tsx)
    • An index file (e.g., Button/index.tsx)
    • A readme file (e.g., Button/readme.md)
For example:

import { Directory } from 'renoun'

const directory = new Directory({ path: 'posts' })

// When both integrations.mdx and integrations/ exist
// returns integrations.mdx (sibling file, not the directory)
const entry = await directory.getEntry('integrations')

// When both Reference.tsx and Reference.examples.tsx exist
// returns Reference.tsx (base file without modifier)
const file = await directory.getFile('Reference')
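The priority order above can be sketched as a small selection function. This is an illustrative model, not renoun's actual resolver; the Candidate shape and pickEntry helper are assumptions made for the example:

```typescript
// Simplified model of a file-system entry for selection purposes
interface Candidate {
  name: string // base name without extension or modifier
  modifier?: string // e.g. 'examples' in Reference.examples.tsx
  extension?: string // undefined for directories
  isDirectory: boolean
}

// Hypothetical sketch of the selection priority described above
function pickEntry(
  query: string,
  candidates: Candidate[],
  extension?: string
): Candidate | undefined {
  let matches = candidates.filter((entry) => entry.name === query)
  if (extension) {
    // 3. Only consider files matching the requested extension
    matches = matches.filter((entry) => entry.extension === extension)
  }
  // 1. Prefer sibling files over directories
  const files = matches.filter((entry) => !entry.isDirectory)
  if (files.length > 0) {
    // 2. Prefer the base file without a modifier
    return files.find((entry) => !entry.modifier) ?? files[0]
  }
  return matches[0]
}

const entries: Candidate[] = [
  { name: 'integrations', isDirectory: true },
  { name: 'integrations', extension: 'mdx', isDirectory: false },
  { name: 'Reference', extension: 'tsx', isDirectory: false },
  { name: 'Reference', modifier: 'examples', extension: 'tsx', isDirectory: false },
]

pickEntry('integrations', entries) // the integrations.mdx file, not the directory
pickEntry('Reference', entries) // Reference.tsx, the base file without a modifier
```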

Type checking file exports

To improve type safety, you can utilize the withSchema helper to specify the schema for the file’s exports:

import { Directory, withSchema } from 'renoun'

interface PostType {
  frontmatter: {
    title: string
    date: Date
  }
}

const posts = new Directory({
  path: 'posts',
  loader: {
    mdx: withSchema<PostType>((path) => import(`./posts/${path}.mdx`)),
  },
})

Now when we call JavaScriptFile#getExportValue and JavaScriptFileExport#getRuntimeValue we get stronger type checking and autocomplete.

Working with globbed module maps

When using bundler utilities like import.meta.glob, you can return a loader map from the loader option to reuse the globbed modules:

/// <reference types="vite/client" />
import { Directory, withSchema } from 'renoun'

interface PostType {
  frontmatter: {
    title: string
    date: Date
  }
}

const posts = new Directory({
  path: 'posts',
  loader: () => {
    const mdxModules = import.meta.glob('./posts/**/*.mdx')
    return {
      mdx: withSchema<PostType>((path) => mdxModules[`./posts/${path}.mdx`]),
    }
  },
})

The loader factory executes once and returns the extension-to-loader map, so the globbed modules are reused across all files in the directory. If the glob returns loader functions themselves, they will automatically be called and awaited for you.
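The "called and awaited for you" behavior can be pictured with a small normalizer. This is a sketch of the idea only; resolveModule is not part of the renoun API, and import.meta.glob entries may be eagerly loaded modules or lazy loader functions depending on the glob options:

```typescript
// Sketch: a globbed entry is either an already-loaded module (eager glob)
// or a loader function returning a promise of the module (lazy glob).
async function resolveModule(entry: unknown): Promise<unknown> {
  // Lazy entries are functions; calling and awaiting them yields the module
  return typeof entry === 'function'
    ? await (entry as () => Promise<unknown>)()
    : entry
}
```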

Schema Validation

You can also apply schema validation using libraries that follow the Standard Schema Spec like Zod, Valibot, or Arktype to ensure file exports conform to a specific schema:

import { Directory, withSchema } from 'renoun'
import { z } from 'zod'

const posts = new Directory({
  path: 'posts',
  loader: {
    mdx: withSchema(
      {
        frontmatter: z.object({
          title: z.string(),
          date: z.date(),
        }),
      },
      (path) => import(`./posts/${path}.mdx`)
    ),
  },
})

Alternatively, you can define a schema yourself using both TypeScript types and custom validation functions:

import { Directory, withSchema } from 'renoun'

interface PostType {
  frontmatter: {
    title: string
    date: Date
  }
}

const posts = new Directory({
  path: 'posts',
  loader: {
    mdx: withSchema<PostType>(
      {
        frontmatter: (value) => {
          if (typeof value.title !== 'string') {
            throw new Error('Title is required')
          }

          if (!(value.date instanceof Date)) {
            throw new Error('Date is required')
          }

          return value
        },
      },
      (path) => import(`./posts/${path}.mdx`)
    ),
  },
})
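Conceptually, a schema map like the one above is applied by running each validator against the matching export and letting it either return the value or throw. The applySchema helper below is an illustrative sketch of that idea, not renoun's implementation:

```typescript
type Validator = (value: any) => any

// Hypothetical sketch: run each validator in the schema map against the
// matching export, keeping the returned value or propagating the error.
function applySchema<Exports extends Record<string, any>>(
  schema: Partial<Record<keyof Exports, Validator>>,
  moduleExports: Exports
): Exports {
  const validated: Record<string, any> = { ...moduleExports }
  for (const key of Object.keys(schema)) {
    const validate = schema[key]
    if (validate) {
      // Each validator either returns the (possibly coerced) value
      // or throws to reject the export
      validated[key] = validate(validated[key])
    }
  }
  return validated as Exports
}

const validated = applySchema(
  {
    frontmatter: (value) => {
      if (typeof value.title !== 'string') {
        throw new Error('Title is required')
      }
      return value
    },
  },
  { frontmatter: { title: 'Hello World', date: new Date() } }
)
```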

The file system utilities are not limited to MDX files and can be used with any file type. By organizing content and source code into structured collections, you can easily generate static pages and manage complex routing and navigations.

Streaming

File instances expose Web-streamable methods directly: slice, stream, arrayBuffer, and text, all without buffering file contents. The content type and size are inferred from the file metadata (via the type and size accessors), so you can treat a File as a streaming Blob immediately.

Because these helpers mirror the standard Web implementations, they can be passed directly to APIs that expect a Blob or File while still streaming their contents in byte ranges. Below are common ways to use them:

If the underlying file system adapter cannot report a byte length, stream() falls back to the raw readable stream instead of throwing. In that case the size accessor and slice() remain unavailable because they rely on a known length, but you can still forward the stream directly.

Surface streaming views from renoun File instances

import { Directory, File, MemoryFileSystem } from 'renoun/file-system'

const directory = new Directory({
  fileSystem: new MemoryFileSystem({
    'video.mp4': new Uint8Array([1, 2, 3]),
  }),
})
const file = new File({ directory, path: 'video.mp4' })

// Work with slices directly on the File
const previewChunk = await file.slice(0, 1024 * 1024).arrayBuffer()
const type = file.type

Upload streaming files with fetch

Streaming files work with fetch (including in Node runtimes that support Web streams). Nothing is buffered before the request is sent:

import { Directory, File, MemoryFileSystem } from 'renoun/file-system'

const directory = new Directory({
  fileSystem: new MemoryFileSystem({
    'video.mp4': new Uint8Array([1, 2, 3]),
  }),
})
const file = await directory.getFile('video.mp4')

await fetch('https://example.com/upload', {
  method: 'PUT',
  body: file.slice(), // streamed as the body
  headers: { 'content-type': file.type },
})

Serve partial responses efficiently

You can return slices without reading the whole asset:

import { Directory, File } from 'renoun/file-system'

const directory = new Directory({
  path: 'workspace:public',
})

export async function GET(request: Request) {
  const rangeHeader = request.headers.get('range')
  const file = new File({ directory, path: 'video.mp4' })
  const { start = 0, end = file.size } = parseRange(rangeHeader)
  const slice = file.slice(start, end)

  return new Response(slice.stream(), {
    status: 206,
    headers: {
      'content-type': file.type,
      'content-range': `bytes ${start}-${end - 1}/${file.size}`,
    },
  })
}

function parseRange(header: string | null) {
  if (!header?.startsWith('bytes=')) return {}
  const [start, end] = header.replace('bytes=', '').split('-').map(Number)
  return { start, end: Number.isFinite(end) ? end + 1 : undefined }
}

Using GitHostFileSystem

The GitHostFileSystem adapter lets you mirror files from a remote Git provider into memory so they can be queried just like local entries. It accepts the repository coordinates and optional filters and then streams the tarball for the requested ref.

import { Directory, GitHostFileSystem } from 'renoun'

const repoFs = new GitHostFileSystem({
  repository: 'souporserious/renoun',
  ref: 'main',
})

const docs = new Directory({
  path: '.',
  fileSystem: repoFs,
})

Avoiding rate limits

Git providers apply very small anonymous rate limits (for example, GitHub only allows 60 unauthenticated requests per hour). Passing an access token to GitHostFileSystem raises those limits dramatically and unlocks the more efficient metadata paths that renoun uses internally.

Provide the token via the token option, ideally by reading from an environment variable so it is not committed to source control:

import { GitHostFileSystem } from 'renoun'

const repoFs = new GitHostFileSystem({
  repository: 'souporserious/renoun',
  token: process.env.GITHUB_TOKEN,
})

When a token is supplied, renoun batches GraphQL blame queries and reuses cached metadata to keep requests to a minimum. Without a token the file system falls back to higher-volume REST sampling, so authenticated requests are the easiest way to prevent rate limiting during development and CI.
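The shape of an authenticated request can be sketched as follows. The header names here follow GitHub's REST API conventions and are assumptions for illustration; they are not necessarily the exact headers renoun sends:

```typescript
// Sketch of attaching an optional access token to provider requests.
// The accept and authorization header values follow GitHub's documented
// REST conventions and are assumptions made for this example.
function providerHeaders(token?: string): Record<string, string> {
  const headers: Record<string, string> = {
    accept: 'application/vnd.github+json',
  }
  if (token) {
    // Authenticated requests get substantially higher rate limits
    headers.authorization = `Bearer ${token}`
  }
  return headers
}
```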

Inspecting packages with Package

The file system utilities also include a Package helper that can discover workspace packages, resolve the closest node_modules installation, or fall back to repositories fetched through GitHostFileSystem. Once discovered, the package manifest is analyzed so you can introspect exports and imports without manually walking the file tree.

import { Package } from 'renoun'

const renounMdx = new Package({
  name: '@renoun/mdx',
  loader: {
    'remark/add-headings': () => import('@renoun/mdx/remark/add-headings'),
  },
})
const remarkAddHeadings = await renounMdx.getExport('remark/add-headings')
const defaultExport = await remarkAddHeadings.getExport('default')

const type = await defaultExport.getType()
const value = await defaultExport.getValue()

Each export directory tracks the manifest conditions, target file path, and helper methods for loading runtime values. The same API works for getImports() when you need to trace how a package consumes other modules, making it straightforward to build documentation or automated analysis around any published or workspace package.

API Reference