83 Commits

Author SHA1 Message Date
Elian Doran
3ac248169f
chore(build-docs): fix typecheck issues 2025-11-01 23:40:27 +02:00
Elian Doran
6b6322798e
test(server): protected session timeout crashing on LLM tests 2025-10-09 12:51:11 +03:00
Elian Doran
c671f91bca
test(server): fix test failures 2025-10-07 14:27:05 +03:00
Elian Doran
df6bb7e6bf
fix(deps): broken types after major update 2025-10-01 19:21:51 +03:00
Elian Doran
4cd0702cbb
refactor: proper websocket message types 2025-09-13 12:59:00 +03:00
Elian Doran
37a79aeeab
fix(server/test): non-platform agnostic test 2025-07-30 08:42:51 +03:00
Elian Doran
d8b85aad7c
chore(rebrand): change product name 2025-06-23 08:43:04 +03:00
Elian Doran
641d2b0527
test(server): skip streaming tests 2025-06-15 14:41:29 +03:00
Elian Doran
8d2d5504dd
test(server): skip memory leak test to see if it breaks the CI 2025-06-15 14:17:18 +03:00
Elian Doran
f8c1dabfd5
Revert "chore(test): skip test breaking the CI"
This reverts commit f3b6817aa7a933dc1c2e378b72c0c617499bdea1.
2025-06-15 13:59:56 +03:00
Elian Doran
f3b6817aa7
chore(test): skip test breaking the CI 2025-06-15 13:46:13 +03:00
Elian Doran
1dce202d21
test(server): try to reduce number to avoid CI crashing 2025-06-15 11:58:03 +03:00
perf3ct
e1e1eb4f51
feat(unit): add unit tests around LLM model names within outgoing requests 2025-06-10 16:27:05 +00:00
perf3ct
e96fdbf72f
fix(llm): fix logging type check 2025-06-09 00:23:02 +00:00
perf3ct
f5ad5b875e
fix(tests): resolve LLM streaming unit test failures
closer to fixing...

closer...

very close to passing...
2025-06-08 23:02:15 +00:00
perf3ct
daa32e4355
Revert "fix(unit): comment out this test for now to see if the rest pass"
This reverts commit 95a33ba3c0bd1b01c7f6f716b42864174f186698.
2025-06-08 22:02:56 +00:00
perf3ct
95a33ba3c0
fix(unit): comment out this test for now to see if the rest pass 2025-06-08 21:54:19 +00:00
perf3ct
b28387bada
feat(llm): decrease the throttle on the chunking tests lol 2025-06-08 21:47:53 +00:00
perf3ct
93cf868dcf
feat(llm): last test should be passing now 2025-06-08 21:38:57 +00:00
perf3ct
224cae6db2
fix(unit): resolve type errors 2025-06-08 21:03:07 +00:00
perf3ct
d60e795421
feat(llm): still working on fixing tests... 2025-06-08 20:39:35 +00:00
perf3ct
c6f2124e9d
feat(llm): add tests for streaming 2025-06-08 20:30:33 +00:00
perf3ct
c1bcb73337
feat(llm): also improve the llm streaming service, to make it cooperate with unit tests better 2025-06-08 18:40:20 +00:00
perf3ct
40cad2e886
fix(unit): I believe it should pass now? 2025-06-08 18:20:30 +00:00
perf3ct
a8faf5d699
fix(unit): still working on getting the LLM unit tests to pass... 2025-06-08 18:13:27 +00:00
perf3ct
e011c56715
fix(unit): no more type errors hopefully 2025-06-08 16:33:26 +00:00
Jon Fuller
d7abd3a8ed
Merge branch 'develop' into feat/llm-unit-tests 2025-06-08 08:49:08 -07:00
perf3ct
c6062f453a
fix(llm): changing providers works now 2025-06-07 23:57:35 +00:00
perf3ct
414781936b
fix(llm): always fetch the user's selected model 2025-06-07 23:36:53 +00:00
perf3ct
7f9ad04b57
feat(llm): create unit tests for LLM services 2025-06-07 21:03:54 +00:00
perf3ct
ff37050470
fix(llm): delete provider_manager for embeddings too 2025-06-07 19:33:19 +00:00
perf3ct
b0d804da08
fix(llm): remove the vectorSearch stage from the pipeline 2025-06-07 18:57:08 +00:00
perf3ct
4550c12c6e
feat(llm): remove everything to do with embeddings, part 3 2025-06-07 18:30:46 +00:00
perf3ct
44a45780b7
feat(llm): remove everything to do with embeddings 2025-06-07 18:11:12 +00:00
perf3ct
cb3844e627
fix(llm): fix duplicated text when streaming responses 2025-06-07 00:27:56 +00:00
perf3ct
6bc9b3c184
feat(llm): resolve sending double headers in responses, and not being able to send requests to ollama 2025-06-07 00:02:26 +00:00
perf3ct
20ec294774
feat(llm): still work on decomplicating provider creation 2025-06-06 20:30:24 +00:00
perf3ct
8f33f37de3
feat(llm): for sure overcomplicate what should be a very simple thing 2025-06-06 20:11:33 +00:00
perf3ct
85cfc8fbd4
feat(llm): have OpenAI provider not require API keys (for endpoints like LM Studio) 2025-06-06 19:22:39 +00:00
perf3ct
c26b74495c
feat(llm): remove LLM deprecated functions 2025-06-05 22:34:20 +00:00
perf3ct
3a4bb47cc1
feat(llm): embeddings work and are created when launching for the first ever time 2025-06-05 21:03:15 +00:00
perf3ct
bb8a374ab8
feat(llm): transition from initializing LLM providers, to creating them on demand 2025-06-05 19:27:45 +00:00
perf3ct
c1b10d70b8
feat(llm): also add functions to clear/unregister embedding providers 2025-06-05 18:59:32 +00:00
perf3ct
49e123f399
feat(llm): create endpoints for starting/stopping embeddings 2025-06-05 18:47:25 +00:00
perf3ct
fe15a0378a
fix(llm): have the model_selection_stage use the instance of the aiServiceManager 2025-06-04 20:23:06 +00:00
perf3ct
a20e36f4ee
feat(llm): change from using a precedence list to using a single specified provider for chat and/or embeddings 2025-06-04 20:13:13 +00:00
perf3ct
b76166b0d5
fix(llm): always fetch the embedding model 2025-06-03 05:13:32 +00:00
perf3ct
d4d55b20a8
fix(llm): get rid of a lot of log.info() statements that were spammy 2025-06-03 03:00:15 +00:00
perf3ct
ab3758c9b3
refactor(llm): resolve issue with headers being sent after request was sent 2025-06-02 23:54:38 +00:00
perf3ct
e7e04b7ccd
refactor(llm): streamline chat response handling by simplifying content accumulation and removing unnecessary thinking content processing 2025-06-02 23:25:15 +00:00