116 Commits

Elian Doran
b7f5c0e07a
feat(mention): disable auto-completion 2025-06-23 23:20:51 +03:00
Elian Doran
3988bb5321
feat(emoji): disable auto-completion only 2025-06-23 22:40:57 +03:00
Elian Doran
ba94616b87
feat(emoji): add an option to disable them (closes #5852) 2025-06-23 22:10:41 +03:00
Elian Doran
9713864bb6
refactor(cpu_dialog): more mentions of rosetta 2025-06-12 22:53:15 +03:00
Elian Doran
c6c59c63bb
feat(cpu_dialog): add actual Windows CPU detection 2025-06-12 22:49:31 +03:00
Elian Doran
a635131f47
style(cpu_dialog): use modal-footer 2025-06-12 22:23:37 +03:00
Elian Doran
8edbbe27f8
refactor(client,server): rebrand to CPU arch warnings 2025-06-12 22:16:57 +03:00
Elian Doran
db3c008c07
fix(server): headers sent twice 2025-06-12 22:07:50 +03:00
Elian Doran
f6bba436f4
Revert "fix(client): also move the logic from the server to the client lol"
This reverts commit e401c8c930c8160f5acd83cfff3b97aef3ca152a.
2025-06-12 21:46:27 +03:00
Elian Doran
59296f3045
fix(server): crashes due to req.body being undefined 2025-06-12 15:01:35 +03:00
perf3ct
ca6277f6e9
feat(llm): handle error catching in streaming better 2025-06-09 00:07:00 +00:00
perf3ct
e98fabcc9d
fix(unit): resolve auth error in llm unit test
keep working
2025-06-08 23:19:40 +00:00
perf3ct
f5ad5b875e
fix(tests): resolve LLM streaming unit test failures
closer to fixing...

closer...

very close to passing...
2025-06-08 23:02:15 +00:00
perf3ct
224cae6db2
fix(unit): resolve type errors 2025-06-08 21:03:07 +00:00
perf3ct
c6f2124e9d
feat(llm): add tests for streaming 2025-06-08 20:30:33 +00:00
perf3ct
c1bcb73337
feat(llm): also improve the llm streaming service, to make it cooperate with unit tests better 2025-06-08 18:40:20 +00:00
perf3ct
0ce5307c0b
fix(llm): well this has been using the wrong value the whole time 2025-06-07 23:16:44 +00:00
perf3ct
d8bbece02a
feat(e2e): llm tests mostly pass 2025-06-07 23:07:54 +00:00
perf3ct
bb483558b0
feat(llm): add e2e tests for llm 2025-06-07 22:41:55 +00:00
perf3ct
4550c12c6e
feat(llm): remove everything to do with embeddings, part 3 2025-06-07 18:30:46 +00:00
perf3ct
44a2e7df21
feat(llm): remove everything to do with embeddings, part 2 2025-06-07 18:20:06 +00:00
perf3ct
44a45780b7
feat(llm): remove everything to do with embeddings 2025-06-07 18:11:12 +00:00
Elian Doran
9ead5abc62
Merge pull request #2181 from TriliumNext/feat/llm-change-to-single-provider
LLM integration, part 4
2025-06-07 11:38:30 +03:00
perf3ct
6bc9b3c184
feat(llm): resolve sending double headers in responses, and not being able to send requests to ollama 2025-06-07 00:02:26 +00:00
perf3ct
85cfc8fbd4
feat(llm): have OpenAI provider not require API keys (for endpoints like LM Studio) 2025-06-06 19:22:39 +00:00
perf3ct
c26b74495c
feat(llm): remove LLM deprecated functions 2025-06-05 22:34:20 +00:00
perf3ct
49e123f399
feat(llm): create endpoints for starting/stopping embeddings 2025-06-05 18:47:25 +00:00
perf3ct
5bc2c3ac18
feat(llm): also have the embedding provider settings be changeable 2025-06-04 22:58:20 +00:00
perf3ct
a20e36f4ee
feat(llm): change from using a precedence list to using a single specified provider for chat and/or embeddings 2025-06-04 20:13:13 +00:00
SiriusXT
3cdee1ac86
Merge branch 'develop' into date/time 2025-06-04 16:48:37 +08:00
Elian Doran
df7f0d4099
Merge pull request #2110 from TriliumNext/feat/llm-integration-part3
LLM Integration, part 3
2025-06-04 11:41:49 +03:00
perf3ct
3050424d53
fix(llm): don't filter for specific words when pulling models for openai 2025-06-03 20:47:16 +00:00
perf3ct
336cd1fbda
fix(llm): storing >1 message in a chat note works 2025-06-03 03:15:17 +00:00
perf3ct
d2ba270fdf
fix(llm): sending messages no longer throws an error at first 2025-06-03 00:18:45 +00:00
perf3ct
ab3758c9b3
refactor(llm): resolve issue with headers being sent after request was sent 2025-06-02 23:54:38 +00:00
perf3ct
e7e04b7ccd
refactor(llm): streamline chat response handling by simplifying content accumulation and removing unnecessary thinking content processing 2025-06-02 23:25:15 +00:00
perf3ct
aad92b57c7
fix(llm): prevent sent message duplication 2025-06-02 22:47:30 +00:00
perf3ct
ed64a5b4f7
refactor(llm): simplify chat handling by removing session store and directly integrating chat storage service 2025-06-02 22:09:59 +00:00
perf3ct
35f78aede9
feat(llm): redo chat storage, part 1 2025-06-02 00:56:19 +00:00
SiriusXT
029d6df5ec
Merge branch 'develop' into date/time 2025-06-01 15:41:46 +08:00
SiriusXT
a8c4b11c9f
feat(insert time): Add configurable date/time format for Alt+T shortcut 2025-06-01 15:27:50 +08:00
perf3ct
2c48a70bfb
feat(llm): use ckeditor for text input area for mention support instead of textinput 2025-06-01 03:03:26 +00:00
SngAbc
e2ac581b14
Merge pull request #2072 from vanndoublen/feature/custom-datetime-format
Feature/custom datetime format
2025-05-31 21:50:26 +08:00
Elian Doran
ba7c93967e
chore(server): fix some type errors 2025-05-28 19:03:53 +03:00
perf3ct
758b22e6b1
feat(server): remove the use of "any" for metrics endpoint 2025-05-26 20:26:03 +00:00
perf3ct
52fb5fa298
feat(server): add metrics endpoint and functionality 2025-05-26 19:50:04 +00:00
Elian Doran
79422da733
Merge pull request #2014 from FliegendeWurst/demo-mode
feat(server): add option to mount database read-only
2025-05-26 16:47:10 +03:00
FliegendeWurst
2427addf65
feat(server): override options for read-only database 2025-05-21 17:24:36 +02:00
Elian Doran
3b6679a744
refactor(serve): solve some more type errors 2025-05-21 16:00:57 +03:00
vanndoublen
f640c9212e
Merge branch 'develop' into feature/custom-datetime-format 2025-05-20 19:55:45 +08:00