
After trying to install it several times, I constantly get the reload window and WSL breaks. #174

Open
venturaEffect opened this issue Jan 5, 2025 · 11 comments

Comments

@venturaEffect

venturaEffect commented Jan 5, 2025

I've tried everything.

I uninstalled and reinstalled WSL from scratch three times, tried different distributions, and followed all the steps meticulously. I switched from WSL 2 to WSL 1, updated everything, and restarted VS Code and the computer. But with this project the terminal suddenly stops and asks me to reload, and after a while it happens again. I can't do anything. It is so frustrating.

I tried with different projects and everything seems fine. It is just with this project: once I open the folder, activate the conda environment, and try to run it, it breaks within minutes and asks to reload again. Then I have to activate the environment and start over, over and over. There is no good explanation.

I followed all the steps in the repo's README file. I even pressed ENTER while installing texlive-full (crazy bug).

Can't figure out what is wrong.

Any suggestion would be appreciated.

@Krakaur

Krakaur commented Jan 5, 2025

I’ve been reflecting on the structure of this community and the challenges we face. It seems we’re dealing with a layered structure of expertise:

Founders and core developers who have deep knowledge but limited time.
Advanced contributors (like support techs) with the skills to troubleshoot and document solutions.
Persistent non-technical users (scientists) who are motivated but lack the tools to overcome certain barriers.

This last group—potentially the largest—represents the future user base, yet they are the most limited right now. To bridge this gap, I propose forming a subcommunity dedicated to technical support and documentation. This group could:

Create videos, guides, and FAQs to explain solutions.
Use platforms like Discord or Reddit to engage users in real-time support.
Act as intermediaries, collecting feedback and reporting major issues to the developers.

This wouldn’t replace GitHub but complement it by creating a more accessible space for non-technical users. Would anyone else be interested in helping start something like this?

@venturaEffect
Author

Not sure why you commented this on my issue. It's not related.

@Krakaur

Krakaur commented Jan 6, 2025

I'm trying to work it out too. If I'm able to, I'll come back here.

@venturaEffect
Author

venturaEffect commented Jan 6, 2025

Appreciated. I hope you have success, because I still can't figure out what is wrong.

@Krakaur

Krakaur commented Jan 9, 2025

Ok, here is my report.

Context: I have Windows 10, a limited laptop, no GPU, and I don't want to spend money paying for tokens just to experiment. Although I am an IT professional, I'm not a developer and my programming experience is limited. I hadn't used Python before. Despite this, I managed to run the code up to the idea generation point, novelty checking, and relevance verification. For now, I plan to perfect several elements before moving forward. I think my experience can be useful to people in similar conditions.

I tried installing Linux on a virtual machine. Terrible idea (a week wasted). The next option was VS Code for the Web. That worked much better, but eventually I started having storage issues (VS Code for the Web uses the browser cache).

The solution was GitHub's educational package and Codespaces. This has worked wonderfully for me. The educational package is valid for a year and gives me 32 GB of storage and access to GitHub Copilot; the environment and project use about 70% of the space, but I don't need more. There are also many other developer benefits. The other great support was ChatGPT-4, as an expert assistant: with each conflict, I share a screenshot and we discuss possible solutions.

Before the first execution, I played around with the code a bit. I'm interested in doing literature reviews, so I asked GPT-4 for a program to fragment a RIS file with 1,500 abstracts that I downloaded from Web of Science. The goal was to feed a model that verifies the relevance of each abstract against a description of the review I plan to do. I used a free Hugging Face API to practice and extensively modified launch_AIscientist.py. The routine worked well, and we generated 10 files with 10 abstracts each. GPT-2 had very limited performance, but the proof of concept (getting a response from the model) gave me the confidence to tackle the complete program. I renamed my modified launch script to launch_backup, restored the original launch script, and started working on that.
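
For anyone who wants to reproduce that splitting step, here is a minimal sketch of the kind of script I mean (the file names, chunk size, and function name are illustrative, not the exact code I used):

```python
# Hypothetical sketch of the RIS-splitting step; names and paths are examples.
from pathlib import Path

def split_ris(path: str, out_dir: str, records_per_file: int = 10) -> None:
    """Split a Web of Science RIS export into smaller files.

    RIS records end with an "ER  -" tag, so we split on that marker.
    """
    text = Path(path).read_text(encoding="utf-8", errors="ignore")
    # Re-attach the end-of-record tag to each record after splitting.
    records = [r.strip() + "\nER  -" for r in text.split("ER  -") if r.strip()]

    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i in range(0, len(records), records_per_file):
        chunk = records[i:i + records_per_file]
        (out / f"abstracts_{i // records_per_file + 1:03d}.ris").write_text(
            "\n\n".join(chunk), encoding="utf-8"
        )

if __name__ == "__main__":
    # Hypothetical file names; 1,500 records at 10 per file would give 150 chunks.
    split_ris("wos_export.ris", "ris_chunks", records_per_file=10)
```

Splitting on the RIS end-of-record tag keeps each reference intact, so any chunk can be fed to the model on its own.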

I knew the first problem would be the absence of an API. I reviewed my options and settled on DeepSeek V3. Currently (January 2025), there's a promotion of 17 million free tokens (equivalent to $5), which will last all month. No credit card required, just email registration.

Semantic Scholar, on the other hand, gave me errors on the first 4 attempts but worked on the fifth. I adjusted the time between requests so as not to overload the system.
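
The pacing I mean is roughly this (the endpoint parameters and delay values here are illustrative, not the exact code):

```python
# Illustrative pacing/retry wrapper for Semantic Scholar requests.
import time
import requests

S2_SEARCH_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def search_papers(query: str, retries: int = 5, base_delay: float = 2.0) -> dict:
    """Query Semantic Scholar, waiting a bit longer after each failed attempt."""
    for attempt in range(1, retries + 1):
        resp = requests.get(
            S2_SEARCH_URL,
            params={"query": query, "limit": 10, "fields": "title,abstract"},
            timeout=30,
        )
        if resp.status_code == 200:
            return resp.json()
        # Typically a 429 or a transient 5xx: pause before retrying.
        time.sleep(base_delay * attempt)
    raise RuntimeError(f"Semantic Scholar still failing after {retries} attempts")
```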

With ChatGPT-4's advice, I progressively modified the code of launch_scientist.py, llm.py, and generate_ideas.py. This took me about 3 days of work. The list of changes is in my next comment.

Note: This is an excellent project if you intend to customize the code and adapt it to your needs. That involves a lot of work, but it can be very rewarding: if you are willing to invest time and effort, small gains can be achieved.

Note 2: As a next step, I plan to incorporate a list of failsafes (alternative search APIs) in case Semantic Scholar stops working, so that the program falls back to these options when it receives error codes (see the sketch below). The list of alternatives is in one of the threads in this repository. So far, I've only spent $0.01 (one cent) on the tests I've conducted, and I'm not planning to invest more. For idea generation and relevance verification, I plan to work through a personal dialogue with the reasoning modes of o1, Opus, and DeepSeek, without needing APIs. Idea generation comes easily to me, and the question of relevance seems like an interesting debate that I can carry out through appropriate prompts. My intention is that the methodology will come out of this same dialogue.
The experimentation part, as implemented in the templates, could have very good applications in certain areas of engineering, mathematics, and physics. The GPU and processing-capacity issue is more complicated: certain simulations may need much more capacity than I have. However, the GitHub education package gives me $100 of Microsoft Azure credit valid for a year, and another $200, valid for a month, under a separate personal-use offer. That should be enough to run some tests and see how far I can get. If it's really worth it, I'll invest in a GPU. If not, I'll probably write a paper about the advantage of using innovative algorithms that allow low resource consumption.
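
The failsafe idea, roughly sketched below; the two backends shown are only examples, and the real list of alternatives is in the other thread:

```python
# Rough sketch of the planned fallback chain; backends shown are examples only.
import requests

def search_semantic_scholar(query: str) -> list:
    r = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": query, "limit": 10},
        timeout=30,
    )
    r.raise_for_status()
    return r.json().get("data", [])

def search_openalex(query: str) -> list:
    # Example fallback; OpenAlex exposes a similar keyword search.
    r = requests.get(
        "https://api.openalex.org/works",
        params={"search": query, "per-page": 10},
        timeout=30,
    )
    r.raise_for_status()
    return r.json().get("results", [])

def search_with_fallback(query: str) -> list:
    """Try each literature-search backend in order; fall through on HTTP errors."""
    for backend in (search_semantic_scholar, search_openalex):
        try:
            return backend(query)
        except requests.RequestException:
            continue
    raise RuntimeError("All literature-search backends failed")
```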


@Krakaur

Krakaur commented Jan 9, 2025

The list of changes to the files:

1. File launch_scientist.py

  1. Dynamic Request for Number of Ideas and Iterations:
    • The code was modified so that the program asks the user for the number of ideas and iterations per idea before starting execution, instead of using fixed values.
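
A minimal sketch of what that prompt looks like (names and defaults are illustrative, not the exact diff):

```python
# Illustrative version of the prompt added before execution starts;
# variable names and defaults are examples, not the exact diff.
def ask_int(prompt: str, default: int) -> int:
    raw = input(f"{prompt} [{default}]: ").strip()
    return int(raw) if raw else default

num_ideas = ask_int("Number of ideas to generate", 2)
num_reflections = ask_int("Iterations (reflections) per idea", 3)
print(f"Generating {num_ideas} ideas with {num_reflections} iterations each.")
```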

2. File llm.py

  1. Integration of Support for deepseek-chat:
    • Support was added for the DeepSeek V3 client by configuring the deepseek-chat model in the create_client function.
  2. Definition of the DeepSeekClient Class:
    • A client was implemented to interact with the DeepSeek API, handling requests and responses in JSON format.
  3. Update of the AVAILABLE_LLMS List:
    • The deepseek-chat model was added to the list of supported models so that the program recognizes it as valid.
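
In outline, the llm.py additions look something like this; DeepSeek exposes an OpenAI-compatible API, so the client can reuse the openai package (the exact wiring in my copy differs in the details):

```python
# Sketch of the llm.py additions; details differ from the actual repo code.
import os
import openai

AVAILABLE_LLMS = [
    # ...existing models...
    "deepseek-chat",  # DeepSeek V3
]

def create_client(model: str):
    if model == "deepseek-chat":
        client = openai.OpenAI(
            api_key=os.environ["DEEPSEEK_API_KEY"],
            base_url="https://api.deepseek.com",
        )
        return client, model
    raise ValueError(f"Model {model} not supported.")

def get_response(client, model: str, system: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        temperature=0.7,
    )
    return resp.choices[0].message.content
```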

3. File generate_ideas.py

  1. Debugging with print and Idea Validation:
    • Logic was added to print generated ideas in their raw format and verify that they have the necessary fields (such as Name) before processing them.
  2. Error Handling with Informative Messages:
    • Messages that print response data in case of errors were implemented to better understand API failures.
  3. Modification of the on_backoff Function:
    • The dynamic wait-time logic was adjusted to handle request limits (429 Too Many Requests). The wait time starts at 2 seconds and increases by 1 second per failed attempt.
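
A compressed sketch of the behaviour described above (field names and helper names are mine; the real generate_ideas.py is structured differently):

```python
# Compressed sketch of the validation and retry policy described above.
import json
import time
from typing import Optional

def validate_idea(raw_text: str) -> Optional[dict]:
    """Print the raw idea and keep it only if required fields (e.g. "Name") exist."""
    print("Raw idea from the model:\n", raw_text)
    try:
        idea = json.loads(raw_text)
    except json.JSONDecodeError as err:
        print("Could not parse idea as JSON:", err)
        return None
    if "Name" not in idea:
        print("Discarding idea: missing 'Name' field")
        return None
    return idea

def call_with_backoff(request_fn, max_attempts: int = 5):
    """Retry on rate limits: wait 2 s at first, +1 s per extra failed attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return request_fn()
        except Exception as err:  # e.g. 429 Too Many Requests raised as an exception
            wait = 2 + (attempt - 1)
            print(f"Attempt {attempt} failed ({err}); waiting {wait} s")
            time.sleep(wait)
    raise RuntimeError("Exceeded maximum retry attempts")
```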

4. General

  1. Dynamic Testing with Reduced Configurations:
    • During testing, the quantities of ideas and iterations were adjusted to reduce resource consumption and facilitate debugging.
  2. Inspection of Raw Responses:
    • Complete responses from the DeepSeek and Semantic Scholar APIs were printed to diagnose errors and ensure requests were properly formatted.
  3. Wait Time Control:
    • Wait times in retries were adjusted to better handle request limits.

Impact of Changes

  • Better Compatibility: The program now supports the DeepSeek V3 model (deepseek-chat).
  • Error Reduction: The adjusted backoff logic and debug messages help handle API errors more efficiently.
  • Greater Flexibility: The dynamic request for number of ideas and iterations allows for more precise control during execution.

@venturaEffect
Author

I really appreciate your effort.

I would not have any issue making it work if these regular stops, for no reason, when running the repo on WSL didn't happen.

No LLM is helping with this issue because there are no real error messages; the tips are general tips that don't solve the problem, which comes directly from running the code of this repo.

So I guess others using the same setup, Windows 11, WSL, and a GPU (in my case a 4090), would encounter the same issues, because on my side everything else works fine. I use WSL all the time with several projects.

Hopefully others can give their suggestions.

Again, I appreciate your effort.

@Krakaur

Krakaur commented Jan 9, 2025


Well, there's a post from the author that mentions something like "this implementation was done on Linux; doing it on Windows might require significant adjustments." That was the reason I tried to do it in a virtual machine.

Theoretically, it's possible to download VS Code and operate from there locally, without needing the web environment. But basically that implies, in a way, emulating certain Linux environment elements on Windows, and that generally involves many limitations. Historically, it has always worked poorly.

Setting up a virtual machine with VMware or VirtualBox could solve the problem, but I couldn't do it because of my laptop's memory limitations. The alternative I was considering was installing Boot Camp, partitioning my hard drive and managing both operating systems, but I didn't like that idea either. For me, the Linux learning curve is very steep. However, if I end up modifying the system to the extent I'm considering, it's a possibility, including investing a considerable amount in equipment with a good GPU. But it all depends on the limitations I encounter later on. Maybe with VS Code for the Web I can create executables customizable enough to run on Windows, and thus parallelize. So far I haven't run into resource problems; maybe that will change during the experimentation process.

I think, indeed, the problem is with WSL. I'm going to research some of this.

@Krakaur

Krakaur commented Jan 9, 2025

What you describe ("restarted VS Code ... the terminal stops and asks to reload") seems to be a very specific problem, related to both VS Code and WSL. The "asks to reload" message in this specific circumstance may be the clue.
Carefully follow the steps in the attached PDF; it may work.

WSL restart.pdf

The original thread is at https://superuser.com/questions/1713488/wsl-stops-responding

But the part that mentions VS Code seems like the best shot to me.

@venturaEffect
Author

Sorry for not responding earlier; family time. I will look into this solution. I really appreciate your efforts!

@venturaEffect
Author

venturaEffect commented Jan 13, 2025

Ok, I've spent all day following the guide in the PDF you suggested.

I uninstalled WSL, went into the BIOS to disable and re-enable virtualization, and installed a brand new WSL 2 (even though others have suggested it gives errors; after trying WSL 1 again I returned to WSL 2).

I updated VS Code, created a new Ubuntu-22.04 distro, and WSL worked fine. I cloned AI Scientist again, created the environment with the correct version of Python, installed the conflicting library that the README.md says requires pressing ENTER during installation (???), and pip-installed the requirements.txt. After this, boom: WSL breaks and the popup asking to reload appears again.

As with my many previous attempts, it is clearly an issue between AI Scientist and WSL on Windows 11.

Thanks for your support.
