ui-lab.app
React component library and design system with accessible, themeable components.

I design and build digital products with a focus on great UI — tools and interfaces that are clear, useful, and enjoyable to use.
My work blends design and development, especially around UI systems and developer-facing products. See more on my GitHub, including ui-lab.
Kyza
Main Projects
Side Projects
Neovim plugin for speech-to-text transcription via Whisper API or local model.
vocal.nvim is a lightweight Neovim plugin that enables speech-to-text transcription directly within the editor. It allows users to record audio through their microphone and automatically transcribe it into text using either OpenAI's Whisper API or local Whisper models. The transcribed text is then inserted at the cursor position or replaces selected text in the buffer.
The plugin enhances text composition workflows by enabling hands-free dictation within Neovim, making it well suited for drafting documents, taking notes, or composing text without typing.
Trigger transcription with the :Vocal command or configured keymaps (default: <leader>v).

require("vocal").setup({
  local_model = {
    model = "base",
    path = "~/whisper",
  },
})
Local model transcription is the default and works without an API key. Swap to the API by setting api_key and omitting local_model.
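A minimal sketch of the API-backed configuration described above; sourcing the key from an environment variable is my assumption for illustration, not something the plugin requires:

```lua
-- Use OpenAI's Whisper API instead of a local model: set api_key and
-- omit local_model. Reading the key from OPENAI_API_KEY is an assumed
-- convention, not mandated by vocal.nvim.
require("vocal").setup({
  api_key = os.getenv("OPENAI_API_KEY"),
})
```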
The gap between "record audio" and "get clean text in the buffer" is bigger than it looks. Sox handles recording fine, but getting asynchronous Whisper inference — especially the local Python subprocess — to report back to Neovim without blocking required careful use of vim.loop and job control. The visual-mode replace path also needed separate handling since nvim_buf_set_text behaves differently with active selections.
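The non-blocking pattern described above can be sketched with Neovim's built-in job control; the `whisper` CLI invocation, function names, and callbacks here are illustrative assumptions, not vocal.nvim's actual internals:

```lua
-- Illustrative sketch: run local Whisper inference in a subprocess and
-- report back to Neovim without blocking the editor.
local function transcribe_async(audio_path, on_done)
  local stdout_chunks = {}
  vim.fn.jobstart({ "whisper", audio_path, "--output_format", "txt" }, {
    stdout_buffered = true,
    on_stdout = function(_, data)
      for _, line in ipairs(data) do
        table.insert(stdout_chunks, line)
      end
    end,
    on_exit = function(_, code)
      -- Buffer edits from event-loop callbacks must be deferred with
      -- vim.schedule to run safely on the main thread.
      vim.schedule(function()
        if code == 0 then
          on_done(table.concat(stdout_chunks, "\n"))
        end
      end)
    end,
  })
end

-- Example: insert the transcript at the current cursor position.
transcribe_async("/tmp/recording.wav", function(text)
  local row, col = unpack(vim.api.nvim_win_get_cursor(0))
  vim.api.nvim_buf_set_text(0, row - 1, col, row - 1, col, vim.split(text, "\n"))
end)
```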
Dependencies: sox (audio recording) and the openai-whisper Python package (local transcription).
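The dependencies above can be installed roughly as follows; the apt-get line assumes a Debian-based system, so substitute your platform's package manager as needed:

```shell
# sox records microphone audio for the plugin
sudo apt-get install sox
# openai-whisper provides local Whisper models for transcription
pip install openai-whisper
```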