mirror of https://github.com/ollama/ollama
README.md
Desktop
This app builds upon Ollama to provide a desktop experience for running models.
Developing
First, build the ollama binary:
cd ..
go build .
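Optionally, you can verify the build by running the binary directly; the ollama CLI accepts a --version flag that prints the version it was built as:
./ollama --version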
Then run the desktop app with npm start:
cd macapp
npm install
npm start
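The app is set up with Electron Forge (see forge.config.ts in this directory), so a packaged, distributable build can typically be produced with Forge's make command. This assumes the default Forge scripts are defined in package.json:
npm run make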