* comments to refactor
* old changes
* minor fixes
* replaced json file with indexes with db
* fixed blob reading
* updated writing to DB
* updated writing to DB for centroids
* pickling fix
* refactor expressions data array
* db refactoring
* refactoring providers
* Continue refactoring fast cache assessor etc. for database-backed cache files.
* Fix new bug related to apt and postgres docker image.
* Remove file-based fast cache assessor tests.
* Deprecate file-based caching test.
* Remove unused artifacts
* Update test and fix/update imports.
* Add one minimal test
* Deprecate reference to unused files
* Fix bug that was clearing out studies-loaded-in-progress prematurely.
* Make study vs measurement study consistent for centroids
* Fix bug with premature clearing of expressions locally.
* Make study vs measurement study consistent in another place.
* Deprecate reference to indexed samples
* Update logger info messages.
* Deprecate volume mounting for ondemand container.
* Refactor to make CLI cache pulling the more advanced operation, reasonable to run after data upload.
* Start updating data-loaded containers to have cache prebuilt.
* Fix missing parameter in import dataset scripts
* Make caching possibly study specific
* Fix some mistakes in extracting from study.json
* Remove old invocation of caching
* Version bump

---------

Co-authored-by: Grigory Frantsuzov <[email protected]>
Co-authored-by: Jimmy Mathews <[email protected]>
Co-authored-by: James Mathews <[email protected]>
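The recurring theme of these items ("replaced json file with indexes with db", "updated writing to DB", "fixed blob reading") is that cache artifacts formerly stored as files now live as rows of an ondemand_studies_index table. Below is a minimal sketch of the write side, for orientation only: the table and column names (blob_type, specimen, blob_contents) match those queried in the diff further down, but the helper write_cache_blob and its INSERT statement are hypothetical, not this commit's actual code, and the sketch assumes DBCursor accepts psycopg-style %s parameters.

from spatialprofilingtoolbox.db.database_connection import DBCursor

def write_cache_blob(
    database_config_file: str,
    study: str,
    blob_type: str,
    contents: bytes,
    specimen: str | None = None,
) -> None:
    """Hypothetical helper: store one cache artifact as a row in ondemand_studies_index."""
    with DBCursor(database_config_file=database_config_file, study=study) as cursor:
        # Parameterized insert; the column names mirror those read back by the new module.
        cursor.execute(
            '''
            INSERT INTO ondemand_studies_index (blob_type, specimen, blob_contents)
            VALUES (%s, %s, %s) ;
            ''',
            (blob_type, specimen, contents),
        )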
1 parent 242b14a · commit a65f533 · 56 changed files with 430 additions and 878 deletions.
@@ -0,0 +1,53 @@
"""Count number of cached files of given types in cached-files area of database."""

from spatialprofilingtoolbox.db.database_connection import DBCursor
from spatialprofilingtoolbox.db.database_connection import retrieve_study_names


def get_counts(database_config_file: str, blob_type: str, study: str | None = None) -> dict[str, int]:
    """Count cached blobs of the given type, for one study or for all studies."""
    if study is None:
        studies = tuple(retrieve_study_names(database_config_file))
    else:
        studies = (study,)
    counts: dict[str, int] = {}
    for _study in studies:
        with DBCursor(database_config_file=database_config_file, study=_study) as cursor:
            # blob_type is interpolated directly into the query string, as in the original.
            cursor.execute(f'''
            SELECT COUNT(*) FROM ondemand_studies_index osi
            WHERE osi.blob_type='{blob_type}' ;
            ''')
            count = tuple(cursor.fetchall())[0][0]
        counts[_study] = count
    return counts


def drop_cache_files(database_config_file: str, blob_type: str, study: str | None = None) -> None:
    """Delete cached blobs of the given type, for one study or for all studies."""
    if study is None:
        studies = tuple(retrieve_study_names(database_config_file))
    else:
        studies = (study,)
    for _study in studies:
        with DBCursor(database_config_file=database_config_file, study=_study) as cursor:
            cursor.execute(f'''
            DELETE FROM ondemand_studies_index osi
            WHERE osi.blob_type='{blob_type}' ;
            ''')


def retrieve_expressions_index(database_config_file: str, study: str) -> str:
    """Fetch a study's expressions index blob, decoded as UTF-8."""
    with DBCursor(database_config_file=database_config_file, study=study) as cursor:
        cursor.execute('''
        SELECT blob_contents FROM ondemand_studies_index osi
        WHERE osi.blob_type='expressions_index' ;
        ''')
        result_blob = bytearray(tuple(cursor.fetchall())[0][0])
    return result_blob.decode(encoding='utf-8')


def retrieve_indexed_samples(database_config_file: str, study: str) -> tuple[str, ...]:
    """List the specimens for which a feature matrix blob is cached."""
    with DBCursor(database_config_file=database_config_file, study=study) as cursor:
        cursor.execute('''
        SELECT specimen FROM ondemand_studies_index osi
        WHERE osi.blob_type='feature_matrix' ;
        ''')
        specimens = tuple(r[0] for r in cursor.fetchall())
    return specimens
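A minimal usage sketch of the new module above; the import path and config file location are placeholders (the file's path within the package is not shown in this view), and 'Example study' is a made-up study name. The blob types 'feature_matrix' and 'expressions_index' are the ones the module itself queries.

# Placeholder import path; substitute the module's actual location in the package.
from cache_assessment import get_counts, drop_cache_files, retrieve_indexed_samples

CONFIG = 'db.config'  # placeholder database config file

# Count cached feature-matrix blobs across all studies (study=None iterates them all).
print(get_counts(CONFIG, 'feature_matrix'))

# List specimens with a cached feature matrix for one study.
print(retrieve_indexed_samples(CONFIG, study='Example study'))

# Drop one study's cached expressions index so it can be rebuilt.
drop_cache_files(CONFIG, 'expressions_index', study='Example study')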