fix port number in backend and add endpoint configuration to example #96

Merged · 10 commits · Jan 16, 2025
2 changes: 1 addition & 1 deletion backend/.env
@@ -1,3 +1,3 @@
-PORT=8090
+PORT=8080
 JWT_SECRET="JACKSONCHENNAHEULALLEN"
 SALT_ROUNDS=123
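
The diff doesn't show where PORT is consumed. As a minimal sketch, assuming the standard NestJS bootstrap used by this codebase (the file name and fallback value are assumptions, not part of this PR), the backend would pick the value up at startup like this:

// main.ts: hypothetical sketch of how backend/.env's PORT is read.
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // PORT=8080 now matches the endpoint configuration this PR adds to the example.
  await app.listen(process.env.PORT ?? 8080);
}
bootstrap();

Under that assumption, the example frontend's endpoint configuration only needs to point at http://localhost:8080.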
8,338 changes: 0 additions & 8,338 deletions backend/pnpm-lock.yaml

This file was deleted.

13 changes: 5 additions & 8 deletions backend/src/build-system/handlers/backend/code-generate/index.ts
@@ -12,7 +12,6 @@ import {
MissingConfigurationError,
ResponseParsingError,
} from 'src/build-system/errors';
import { Logger } from '@nestjs/common';

/**
* BackendCodeHandler is responsible for generating the backend codebase
@@ -65,20 +64,18 @@ export class BackendCodeHandler implements BuildHandler<string> {
databaseType,
databaseSchemas,
currentFile,
'javascript',
'javascript', // TODO: make sure this lang come from the context
dependencyFile,
);

let generatedCode: string;
try {
// Invoke the language model to generate the backend code
const messages: MessageInterface[] = [
{ content: backendCodePrompt, role: 'system' },
];
const modelResponse = await chatSyncWithClocker(
context,
messages,
'gpt-4o-mini',
{
model: 'gpt-4o-mini',
messages: [{ content: backendCodePrompt, role: 'system' }],
},
'generateBackendCode',
this.id,
);
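The same refactor recurs in every handler in this PR: chatSyncWithClocker now takes a single options object ({ model, messages }) in place of the old positional messages and model arguments, and failures are rethrown as typed build-system errors. A condensed sketch of the pattern, with types inferred from the hunks (the exact options type isn't shown in this diff):

// Hedged sketch of the call-and-rethrow pattern this PR standardizes on.
let modelResponse: string;
try {
  modelResponse = await chatSyncWithClocker(
    context,
    {
      model: 'gpt-4o-mini',
      messages: [{ content: backendCodePrompt, role: 'system' }],
    },
    'generateBackendCode', // step label passed to the clocker (assumed to be for timing)
    this.id,
  );
} catch (error) {
  throw new ModelUnavailableError('Model Unavailable:' + error);
}

Folding the prompt into the options object removes the separately built MessageInterface[] array that each handler previously constructed.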
@@ -12,6 +12,7 @@ import {
FileNotFoundError,
FileModificationError,
ResponseParsingError,
ModelUnavailableError,
} from 'src/build-system/errors';

/**
@@ -47,12 +48,11 @@ export class BackendFileReviewHandler implements BuildHandler<string> {
try {
this.logger.log(`Scanning backend directory: ${backendPath}`);
files = await fs.readdir(backendPath);
if (!files.length) {
throw new FileNotFoundError('No files found in the backend directory.');
}
this.logger.debug(`Found files: ${files.join(', ')}`);
} catch (error) {
throw error;
throw new FileNotFoundError(
'No files found in the backend directory:' + error,
);
}

const filePrompt = prompts.identifyBackendFilesToModify(
@@ -70,13 +70,15 @@
];
modelResponse = await chatSyncWithClocker(
context,
messages,
'gpt-4o-mini',
{
model: 'gpt-4o-mini',
messages,
},
'generateBackendCode',
this.id,
);
} catch (error) {
throw error;
throw new ModelUnavailableError('Model Unavailable:' + error);
}

const filesToModify = this.parseFileIdentificationResponse(modelResponse);
@@ -87,39 +89,41 @@

for (const fileName of filesToModify) {
const filePath = path.join(backendPath, fileName);
let currentContent: string;
try {
const currentContent = await fs.readFile(filePath, 'utf-8');
const modificationPrompt = prompts.generateFileModificationPrompt(
fileName,
currentContent,
backendRequirement,
projectOverview,
backendCode,
currentContent = await fs.readFile(filePath, 'utf-8');
} catch (error) {
throw new FileNotFoundError(
`Failed to read file: ${fileName}:` + error,
);
}
const modificationPrompt = prompts.generateFileModificationPrompt(
fileName,
currentContent,
backendRequirement,
projectOverview,
backendCode,
);

const messages: MessageInterface[] = [
{ content: modificationPrompt, role: 'system' },
];
const response = await chatSyncWithClocker(
let response;
try {
response = await chatSyncWithClocker(
context,
messages,
'gpt-4o-mini',
{
model: 'gpt-4o-mini',
messages: [{ content: modificationPrompt, role: 'system' }],
},
'generateBackendFile',
this.id,
);

const newContent = formatResponse(response);
if (!newContent) {
throw new FileModificationError(
`Failed to generate content for file: ${fileName}.`,
);
}

await fs.writeFile(filePath, newContent, 'utf-8');
this.logger.log(`Successfully modified ${fileName}`);
} catch (error) {
throw error;
throw new ModelUnavailableError('Model Unavailable:' + error);
}
const newContent = formatResponse(response);

await fs.writeFile(filePath, newContent, 'utf-8');
⚠️ Potential issue

Add error handling for the file write operation.

The file write operation should be wrapped in a try-catch block to handle potential write errors.

-      await fs.writeFile(filePath, newContent, 'utf-8');
+      try {
+        await fs.writeFile(filePath, newContent, 'utf-8');
+      } catch (error) {
+        throw new FileModificationError(`Failed to write to file ${fileName}: ${error}`);
+      }

Committable suggestion skipped: line range outside the PR's diff.


this.logger.log(`Successfully modified ${fileName}`);
}

return {
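Because the rendered hunk interleaves old and new lines, the final shape of the modification loop is easier to follow in one piece. A hedged reconstruction from the hunks above (code the diff truncates is omitted):

// Sketch of the per-file loop as it reads after this PR, assembled from the diff.
for (const fileName of filesToModify) {
  const filePath = path.join(backendPath, fileName);

  let currentContent: string;
  try {
    currentContent = await fs.readFile(filePath, 'utf-8');
  } catch (error) {
    throw new FileNotFoundError(`Failed to read file: ${fileName}:` + error);
  }

  const modificationPrompt = prompts.generateFileModificationPrompt(
    fileName,
    currentContent,
    backendRequirement,
    projectOverview,
    backendCode,
  );

  let response;
  try {
    response = await chatSyncWithClocker(
      context,
      {
        model: 'gpt-4o-mini',
        messages: [{ content: modificationPrompt, role: 'system' }],
      },
      'generateBackendFile',
      this.id,
    );
  } catch (error) {
    throw new ModelUnavailableError('Model Unavailable:' + error);
  }

  const newContent = formatResponse(response);
  await fs.writeFile(filePath, newContent, 'utf-8');
  this.logger.log(`Successfully modified ${fileName}`);
}

Note that the inline review comment above additionally suggests wrapping the fs.writeFile call in its own try/catch.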
@@ -3,13 +3,11 @@ import { BuilderContext } from 'src/build-system/context';
import { generateBackendOverviewPrompt } from './prompt';
import { Logger } from '@nestjs/common';
import { removeCodeBlockFences } from 'src/build-system/utils/strings';
import { chatSyncWithClocker } from 'src/build-system/utils/handler-helper';
import { MessageInterface } from 'src/common/model-provider/types';
import {
MissingConfigurationError,
ModelUnavailableError,
ResponseParsingError,
} from 'src/build-system/errors';
import { chatSyncWithClocker } from 'src/build-system/utils/handler-helper';

type BackendRequirementResult = {
overview: string;
@@ -68,27 +66,17 @@ export class BackendRequirementHandler
let backendOverview: string;

try {
const messages: MessageInterface[] = [
{ content: overviewPrompt, role: 'system' },
];
backendOverview = await chatSyncWithClocker(
context,
messages,
'gpt-4o-mini',
{
model: 'gpt-4o-mini',
messages: [{ content: overviewPrompt, role: 'system' }],
},
'generateBackendOverviewPrompt',
this.id,
);

if (!backendOverview) {
throw new ModelUnavailableError(
'The model did not respond within the expected time.',
);
}
if (backendOverview.trim() === '') {
throw new ResponseParsingError('Generated backend overview is empty.');
}
} catch (error) {
throw error;
throw new ModelUnavailableError('Model is unavailable:' + error);
🛠️ Refactor suggestion

Add response validation before usage.

Similar to other handlers, we should validate the model's response before using it.

     } catch (error) {
       throw new ModelUnavailableError('Model is unavailable:' + error);
     }
+    
+    if (!backendOverview?.trim()) {
+      throw new ModelUnavailableError('Model returned empty response');
+    }

Committable suggestion skipped: line range outside the PR's diff.

}

// Return generated data
@@ -6,10 +6,7 @@ import { Logger } from '@nestjs/common';
import { removeCodeBlockFences } from 'src/build-system/utils/strings';
import {
MissingConfigurationError,
ResponseParsingError,
ModelUnavailableError,
TemporaryServiceUnavailableError,
RateLimitExceededError,
} from 'src/build-system/errors';
import { chatSyncWithClocker } from 'src/build-system/utils/handler-helper';
import { MessageInterface } from 'src/common/model-provider/types';
@@ -41,34 +38,17 @@ export class DatabaseRequirementHandler implements BuildHandler<string> {
let dbRequirementsContent: string;

try {
const messages: MessageInterface[] = [
{ content: prompt, role: 'system' },
];
dbRequirementsContent = await chatSyncWithClocker(
context,
messages,
'gpt-4o-mini',
{
model: 'gpt-4o-mini',
messages: [{ content: prompt, role: 'system' }],
},
'generateDatabaseRequirementPrompt',
this.id,
);

if (!dbRequirementsContent) {
throw new ModelUnavailableError(
'The model did not respond within the expected time.',
);
}

if (dbRequirementsContent.trim() === '') {
throw new ResponseParsingError(
'Generated database requirements content is empty.',
);
}
} catch (error) {
this.logger.error(
'Error during database requirements generation:',
error,
);
throw error; // Propagate error to upper-level handler
throw new ModelUnavailableError('Model Unavailable:' + error);
🛠️ Refactor suggestion

Add response validation before usage.

While the error handling has been simplified, we should validate the model's response before using it. An empty or malformed response could cause issues downstream.

     } catch (error) {
       throw new ModelUnavailableError('Model Unavailable:' + error);
     }
+    
+    if (!dbRequirementsContent?.trim()) {
+      throw new ModelUnavailableError('Model returned empty response');
+    }

Committable suggestion skipped: line range outside the PR's diff.
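
Both refactor suggestions in this review propose the same empty-response guard. If that advice is adopted, a small shared helper (hypothetical, not part of this PR or of the suggestions) would avoid repeating the check in each handler:

// Hypothetical helper consolidating the validation both review comments suggest.
function ensureNonEmptyResponse(
  response: string | undefined,
  step: string,
): string {
  if (!response?.trim()) {
    throw new ModelUnavailableError(`Model returned empty response during ${step}`);
  }
  return response;
}

// Usage, with names taken from the hunk above:
dbRequirementsContent = ensureNonEmptyResponse(
  dbRequirementsContent,
  'generateDatabaseRequirementPrompt',
);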

}

return {