Date | Commit message | Author
---|---|---
2023-07-13 | Revert "Support using mmap when applying LoRA (#2095)" (#2206) | Howard Su |
2023-07-11 | Support using mmap when applying LoRA (#2095) | Howard Su |
2023-06-26 | ggml : add NUMA support (#1556) | zrm |
2023-06-05 | metal : use shared buffers between CPU and GPU (#1696) | kiltyj |
2023-05-20 | cuda : loading models directly into VRAM, norm calculation on GPU, broadcasti... | Johannes Gäßler |
2023-05-20 | llama : fix name shadowing and C4146 (#1526) | Maxime |
2023-05-04 | Wrap exceptions in std::exception to verbose output on exception. (#1316) | Ivan Stepanov |
2023-05-01 | llama : update stubs for systems without mmap and mlock (#1266) | xloem |
2023-05-01 | cuBLAS: fall back to pageable memory if pinned alloc fails (#1233) | slaren |
2023-04-29 | examples : fix save-load-state + rename llama-util.h | Georgi Gerganov |