thor.analy.analyze_gene_expression_gradient
- thor.analy.analyze_gene_expression_gradient(adata, img_key: str = 'fullres', layer_key: str = None, range_from_edge: Tuple[int, int] = (-150, 150), baseline_from_edge: Tuple[int, int] = (-150, -100), bin_size: int = 30, n_top_genes: int = 10, min_mean_gene_expression: float = 0.1, tmpout_path: str = 'geg.json') → Tuple[DataFrame, ndarray, ndarray] [source]
Analyze gene expression against a baseline in a selected region of interest (ROI).
- Parameters:
  - adata (anndata.AnnData) – The input data matrix.
  - img_key (str, optional) – The key for the image on which the JSON ROI is drawn. Default is "fullres". Valid options are "lowres" (unlikely), "hires" (unlikely), and "fullres".
  - layer_key (str, optional) – The key for the layer data in adata.layers.
  - range_from_edge (tuple of int, optional) – The range of the ROI from the edge of the image. Default is (-150, 150).
  - baseline_from_edge (tuple of int, optional) – The range of the baseline from the edge of the image. Default is (-150, -100).
  - bin_size (int, optional) – The size of the bins used to compute the differential gene expression. Default is 30.
  - n_top_genes (int, optional) – The number of top genes to plot. Default is 10.
  - min_mean_gene_expression (float, optional) – The minimum mean gene expression used to filter genes. Default is 0.1.
  - tmpout_path (str, optional) – The path to the temporary output file. Default is 'geg.json'.
- Returns:
A tuple containing the differential gene expression dataframe, the ROI polygon, and the baseline polygon.
- Return type:
  Tuple[DataFrame, ndarray, ndarray]
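The core computation can be sketched with synthetic data: bin cells by their signed distance from the ROI edge over range_from_edge, average expression per bin, compare each bin against the mean expression in the baseline_from_edge band, filter weakly expressed genes with min_mean_gene_expression, and rank the top genes by gradient strength. This is a minimal illustration with NumPy/pandas, not the thor implementation; the distance values, expression matrix, and log-fold-change ranking here are assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic inputs: signed distance of each cell from the ROI edge
# (negative = inside the ROI), and a cells x genes expression matrix.
n_cells, n_genes = 500, 20
dist = rng.uniform(-200, 200, n_cells)
expr = rng.gamma(2.0, 0.5, (n_cells, n_genes))
genes = [f"gene{i}" for i in range(n_genes)]

# Parameters mirroring the function's defaults.
range_from_edge = (-150, 150)
baseline_from_edge = (-150, -100)
bin_size = 30
min_mean_gene_expression = 0.1
n_top_genes = 10

# Baseline: mean expression over cells in the baseline band.
base_mask = (dist >= baseline_from_edge[0]) & (dist < baseline_from_edge[1])
baseline = expr[base_mask].mean(axis=0)

# Bin cells by distance from the edge and average expression per bin.
edges = np.arange(range_from_edge[0], range_from_edge[1] + bin_size, bin_size)
bin_idx = np.digitize(dist, edges) - 1
rows = []
for b in range(len(edges) - 1):
    in_bin = bin_idx == b
    if in_bin.any():
        rows.append(expr[in_bin].mean(axis=0))
per_bin = pd.DataFrame(rows, columns=genes)

# Per-bin log2 fold change relative to the baseline band.
lfc = np.log2((per_bin + 1e-9) / (baseline + 1e-9))

# Filter weakly expressed genes, then rank by gradient strength.
keep = per_bin.mean(axis=0) >= min_mean_gene_expression
top = lfc.loc[:, keep].abs().max(axis=0).nlargest(n_top_genes)
print(top.index.tolist())
```

In the real function the distance-from-edge values come from the ROI polygon drawn on the image keyed by img_key, and the differential expression table is what is returned alongside the ROI and baseline polygons.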