-
I'm working on a gentle introduction to writing LSP-based tooling, using pygls, so I'm looking for suggestions/guidance. I could go completely independent and use - I think? - the json-rpc package to implement the client. However, I suspect I'd just end up re-implementing much of what's already built into pygls.

To make that more concrete, here's the skeleton of the test I want to write:

```python
from server.server import greet_server  # instance of GreetLanguageServer, a subclass of LanguageServer


def test_invalid_greeting_returns_correct_diagnostic():
    # arrange
    address = "127.0.0.1"
    port = 4242
    server = greet_server.start_tcp(address, port)  # TODO: need to start concurrently, this is synchronous
    client = LanguageServerTestClient(address, port)  # TODO: implement this class
    did_open_notification = ...  # TODO: construct instance of did_open message

    # act
    client.send_notification(did_open_notification)
    msg = client.await_notification()

    # assert
    # TODO: extract diagnostic and assert range and message as expected.
    server.shutdown()
```

Thanks in advance.
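Regarding the "start concurrently" TODO above: one common option is to push the blocking serve call onto a background thread. The sketch below is stdlib-only - the echo server is a stand-in for the real `greet_server`, and it assumes pygls' `start_tcp` blocks in the same way `serve_forever` does.

```python
import socket
import socketserver
import threading


class _EchoHandler(socketserver.BaseRequestHandler):
    """Stand-in for a language server's TCP handler (not pygls code)."""

    def handle(self):
        data = self.request.recv(1024)
        self.request.sendall(data)


# Bind to port 0 so the OS picks a free port for the test.
server = socketserver.TCPServer(("127.0.0.1", 0), _EchoHandler)
host, port = server.server_address

# Run the blocking serve loop on a daemon thread so the test can continue.
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

# ... the test client would talk to (host, port) here ...
with socket.create_connection((host, port)) as sock:
    sock.sendall(b"ping")
    reply = sock.recv(1024)

server.shutdown()  # unblocks serve_forever so the thread can exit
thread.join()
server.server_close()
print(reply)  # b'ping'
```

The same pattern would let the test body drive a `LanguageServerTestClient` against the server and call `server.shutdown()` in teardown.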
-
Alex Carney (I'm sure he'll chime in anyway) is actually working on a dedicated pygls testing framework: https://github.com/alcarney/lsp-devtools/#libpytest-lsp---end-to-end-testing-of-language-servers-with-pytest
-
Alternatively... as @tombh mentioned, you might be interested in pytest-lsp. Anyway, if you're interested, an example test case can be seen in the project's readme.
-
Thanks both, I'll have a look at your suggestions. Appreciate the quick responses too.
-
@alcarney I finally had an opportunity to look at pytest-lsp -- thanks for writing it and making it available. I have some questions (and I hope it's OK to post here - let me know if not). In summary: the readme and getting started examples use different approaches, and I'd like to understand if one is preferred over the other. I have a minimally working example that draws from both:

```python
# test_server_readme.py
import sys

import pytest
import pytest_lsp
from pytest_lsp import ClientServerConfig


@pytest_lsp.fixture(
    # scope='session',
    config=ClientServerConfig(
        server_command=[sys.executable, "toy_server.py"],
        root_uri="file:///path/to/test/project/root/",
    ),
)
async def client():
    pass


@pytest.mark.asyncio
async def test_completion(client):
    test_uri = "file:///path/to/test/project/root/test_file.rst"
    result = await client.completion_request(test_uri, line=5, character=23)
    assert len(result) == 2
```

and here's the toy server:
```python
# toy_server.py
from lsprotocol.types import TEXT_DOCUMENT_COMPLETION
from lsprotocol.types import CompletionItem
from lsprotocol.types import CompletionParams
from pygls.server import LanguageServer

server = LanguageServer("hello-world", "v1")


@server.feature(TEXT_DOCUMENT_COMPLETION)
def completion(ls: LanguageServer, params: CompletionParams):
    return [
        CompletionItem(label="hello"),
        CompletionItem(label="world"),
    ]


if __name__ == "__main__":
    server.start_io()
```

This passes, albeit I had to comment out the `scope='session'` argument to the fixture.
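(Side note, tying back to the hand-rolled json-rpc client floated in the original question: part of what such a client would have to reimplement before it could talk to a stdio server like this one is the LSP base protocol's message framing - a `Content-Length` header, a blank line, then the UTF-8 JSON body. A minimal sketch; the request contents are illustrative, not taken from this thread.)

```python
import json


def frame_lsp_message(payload: dict) -> bytes:
    """Encode a JSON-RPC payload with LSP base-protocol framing:
    a Content-Length header, a blank line, then the UTF-8 JSON body."""
    body = json.dumps(payload).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body


# Illustrative request; the field values are made up for the example.
request = frame_lsp_message({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/completion",
    "params": {
        "textDocument": {"uri": "file:///path/to/test/project/root/test_file.rst"},
        "position": {"line": 5, "character": 23},
    },
})
print(request[:30])
```

This framing (and the matching parser for responses) is exactly the plumbing that pygls and pytest-lsp already handle.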
I notice the getting started example uses a different approach to the fixture (which seems consistent with standard pytest fixtures):

```python
# test_server_docs.py
import sys

from lsprotocol.types import ClientCapabilities
from lsprotocol.types import InitializeParams

import pytest_lsp
from pytest_lsp.client import LanguageClient
from pytest_lsp.plugin import ClientServerConfig


@pytest_lsp.fixture(
    config=ClientServerConfig(server_command=[sys.executable, "toy_server.py"]),
)
async def client(lsp_client: LanguageClient):
    # Setup
    params = InitializeParams(capabilities=ClientCapabilities())
    await lsp_client.initialize(params)

    yield

    # Teardown
    await lsp_client.shutdown()


async def test_completions(client: LanguageClient):
    """Ensure that the server implements completions correctly."""
    results = await client.completion_request(
        uri="file:///path/to/file.txt", line=1, character=0
    )

    labels = [item.label for item in results]
    assert labels == ["hello", "world"]
```

However, pytest fails when I run that:
I'm conscious you're in active development, so would be grateful for any pointers on how best to proceed. Running Python 3.10.6 on WSL (Ubuntu 22.04), packages as follows:

```
$ python3 -m pip freeze
appdirs==1.4.4
attrs==22.2.0
cattrs==22.2.0
exceptiongroup==1.1.1
iniconfig==2.0.0
lsprotocol==2023.0.0a1
packaging==23.0
pluggy==1.0.0
pygls==1.0.1
pytest==7.3.0
pytest-asyncio==0.21.0
pytest-lsp==0.2.1
tomli==2.0.1
tree-sitter==0.20.1
typeguard==2.13.3
```

Thanks for any suggestions.
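On the difference between the two fixture styles above: the getting-started version is an ordinary pytest async-generator fixture, where everything before the `yield` is setup and everything after it is teardown. A stdlib-only sketch of that mechanic (the names here are illustrative, not pytest-lsp API):

```python
import asyncio


async def demo():
    events = []

    # Illustrative async-generator "fixture": code before the yield is
    # setup, code after it is teardown.
    async def client_fixture():
        events.append("initialize")  # cf. await lsp_client.initialize(...)
        yield "client"
        events.append("shutdown")    # cf. await lsp_client.shutdown()

    gen = client_fixture()
    value = await gen.__anext__()    # run setup, receive the yielded value
    events.append(f"test body ran with {value!r}")
    try:
        await gen.__anext__()        # resume past the yield -> teardown
    except StopAsyncIteration:
        pass
    return events


events = asyncio.run(demo())
print(events)  # ['initialize', "test body ran with 'client'", 'shutdown']
```

pytest (via pytest-asyncio or a plugin-provided wrapper) drives this same dance for each test that requests the fixture.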
Ah, you've managed to catch me mid-refactor! 😅

I realized that the library was too tightly coupled to the way esbonio currently works, so some breaking changes were required to undo some of the assumptions I had made - I'm also taking the opportunity to simplify the internal architecture a little bit. If you fancy trying the development version, you should be able to try it with the following `pip install` command.

I'm still expecting one more set of breaking changes to land where I align the LSP method signatures on the test client to match the client in #328 since they've drifted a bit, b…