BigFeta schema

class bigfeta.schemas.BigFetaSchema(extra=None, only=None, exclude=(), prefix='', strict=None, many=False, context=None, load_only=(), dump_only=(), partial=False)

Bases: argschema.schemas.ArgSchema

The input schema used by the BigFeta solver

This schema is designed to be a schema_type for an ArgSchemaParser object
BigFetaSchema
key | description | default | field_type | json_type
input_json | file path of input json file | NA | InputFile | str
output_json | file path to output json file | NA | OutputFile | str
log_level | set the logging level of the module | ERROR | LogLevel | str
first_section | first section for matrix assembly | (REQUIRED) | Integer | int
last_section | last section for matrix assembly | (REQUIRED) | Integer | int
n_parallel_jobs | number of parallel jobs that will run for retrieving tilespecs, assembly from pointmatches, and import_tilespecs_parallel | 4 | Integer | int
processing_chunk_size | number of pairs per multiprocessing job; can help parallelize pymongo calls | 1 | Integer | int
solve_type | solve type options (montage, 3D) | montage | String | str
close_stack | set output stack to state COMPLETE? | True | Boolean | bool
overwrite_zlayer | delete section before importing tilespecs? | True | Boolean | bool
profile_data_load | module will raise an exception after timing the tilespec read | False | Boolean | bool
transformation | transformation to use for the solve | AffineModel | String | str
fullsize_transform | use fullsize affine transform | False | Boolean | bool
poly_order | order of polynomial transform | 2 | Integer | int
output_mode | none: just solve and show logging output; hdf5: assemble to hdf5_options.output_dir; stack: write to output stack | none | String | str
assemble_from_file | path to an hdf5 file for solving from hdf5 output; mainly for testing purposes, since hdf5 output is usually solved by an external solver | | String | str
ingest_from_file | path to an hdf5 file output from the external solver | | String | str
render_output | anything besides the default will show all the render stderr/stdout | null | String | str
input_stack | specifies the origin of the tilespecs | NA | input_stack | dict
output_stack | specifies the destination of the tilespecs | NA | output_stack | dict
pointmatch | specifies the origin of the point correspondences | NA | pointmatch | dict
hdf5_options | options invoked if output_mode is 'hdf5' | NA | hdf5_options | dict
matrix_assembly | options that control which correspondences are included in the matrix equation and their weights | NA | matrix_assembly | dict
regularization | options that control the regularization of different types of variables in the solve | NA | regularization | dict
transform_apply | tilespec.tforms[i].tform() for i in transform_apply will be applied to the matches before matrix assembly | [] | List | int
solve_implementation | solver implementation to use: petsc, scipy, or default (chooses one of the two) | default | String | str
opts = <marshmallow.schema.SchemaOpts object>
validate_data(data)
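A minimal sketch of how this schema is typically consumed: an input dictionary is validated by an argschema.ArgSchemaParser constructed with schema_type=BigFetaSchema. All values below (paths, section numbers) are hypothetical, and the exact set of required keys depends on the validators above.

```python
import argschema
from bigfeta.schemas import BigFetaSchema

# Hypothetical montage-solve input; keys follow the table above, values are placeholders.
example_input = {
    "first_section": 1000,
    "last_section": 1000,
    "solve_type": "montage",
    "transformation": "AffineModel",
    "output_mode": "none",
    "input_stack": {"db_interface": "file",
                    "input_file": "/path/to/input_tiles.json.gz"},
    "output_stack": {"db_interface": "file",
                     "output_file": "/path/to/solved_tiles.json.gz"},
    "pointmatch": {"db_interface": "file",
                   "input_file": "/path/to/matches.json.gz"},
}

# ArgSchemaParser validates example_input against BigFetaSchema and fills in
# the defaults from the table above; the result is available as parser.args.
parser = argschema.ArgSchemaParser(
    input_data=example_input, schema_type=BigFetaSchema, args=[])
```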
class bigfeta.schemas.input_stack(extra=None, only=None, exclude=(), prefix='', strict=None, many=False, context=None, load_only=(), dump_only=(), partial=False)

Bases: bigfeta.schemas.input_db

input_stack
key | description | default | field_type | json_type
owner | render or mongo owner | | String | str
project | render or mongo project | | String | str
name | render or mongo collection name | NA | List | str
host | render host | NA | String | str
port | render port | 8080 | Integer | int
mongo_host | mongodb host | em-131fs | String | str
mongo_port | mongodb port | 27017 | Integer | int
mongo_userName | mongo user name | | String | str
mongo_authenticationDatabase | mongo admin db | | String | str
mongo_password | mongo password | | String | str
db_interface | render: read or write via render; mongo: read or write via pymongo; file: read or write to file | mongo | String | str
client_scripts | see renderapi.render.RenderClient | /allen/aibs/pipeline/image_processing/volume_assembly/render-jars/production/scripts | String | str
memGB | see renderapi.render.RenderClient | 5G | String | str
validate_client | see renderapi.render.RenderClient | False | Boolean | bool
input_file | json or json.gz serialization of input | None | InputFile | str
collection_type | 'stack' or 'pointmatch' | stack | String | str
use_rest | passed as arg in import_tilespecs_parallel | False | Boolean | bool
opts = <marshmallow.schema.SchemaOpts object>
validate_data(data)
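As an illustration, here is a hedged example of an input_stack block that reads tilespecs from a render stack; the owner, project, name, host, and client_scripts values are placeholders.

```python
# Hypothetical input_stack block reading tilespecs via a render server.
input_stack = {
    "db_interface": "render",            # one of: render, mongo, file
    "owner": "example_owner",
    "project": "example_project",
    "name": ["example_input_stack"],     # collection name is a list of strings
    "host": "render-host.example.org",
    "port": 8080,
    "client_scripts": "/path/to/render/client/scripts",
}
```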
class bigfeta.schemas.output_stack(extra=None, only=None, exclude=(), prefix='', strict=None, many=False, context=None, load_only=(), dump_only=(), partial=False)

Bases: bigfeta.schemas.db_params

output_stack
key | description | default | field_type | json_type
owner | render or mongo owner | | String | str
project | render or mongo project | | String | str
name | render or mongo collection name | NA | List | str
host | render host | NA | String | str
port | render port | 8080 | Integer | int
mongo_host | mongodb host | em-131fs | String | str
mongo_port | mongodb port | 27017 | Integer | int
mongo_userName | mongo user name | | String | str
mongo_authenticationDatabase | mongo admin db | | String | str
mongo_password | mongo password | | String | str
db_interface | render: read or write via render; mongo: read or write via pymongo; file: read or write to file | mongo | String | str
client_scripts | see renderapi.render.RenderClient | /allen/aibs/pipeline/image_processing/volume_assembly/render-jars/production/scripts | String | str
memGB | see renderapi.render.RenderClient | 5G | String | str
validate_client | see renderapi.render.RenderClient | False | Boolean | bool
output_file | json or json.gz serialization of stack ResolvedTiles | None | OutputFile | str
compress_output | if writing to file, compress with gzip | True | Boolean | bool
collection_type | 'stack' or 'pointmatch' | stack | String | str
use_rest | passed as kwarg to renderapi.client.import_tilespecs_parallel | False | Boolean | bool
opts = <marshmallow.schema.SchemaOpts object>
validate_data(data)
validate_file(data)
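A corresponding sketch of an output_stack block that writes the solved tilespecs to a gzipped json file rather than back to render or mongo; the path is hypothetical.

```python
# Hypothetical output_stack block writing solved ResolvedTiles to a compressed file.
output_stack = {
    "db_interface": "file",
    "output_file": "/path/to/solved_resolvedtiles.json.gz",  # placeholder path
    "compress_output": True,                                  # gzip the written file
}
```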
class bigfeta.schemas.pointmatch(extra=None, only=None, exclude=(), prefix='', strict=None, many=False, context=None, load_only=(), dump_only=(), partial=False)

Bases: bigfeta.schemas.input_db

pointmatch
key | description | default | field_type | json_type
owner | render or mongo owner | | String | str
project | render or mongo project | | String | str
name | render or mongo collection name | NA | List | str
host | render host | NA | String | str
port | render port | 8080 | Integer | int
mongo_host | mongodb host | em-131fs | String | str
mongo_port | mongodb port | 27017 | Integer | int
mongo_userName | mongo user name | | String | str
mongo_authenticationDatabase | mongo admin db | | String | str
mongo_password | mongo password | | String | str
db_interface | render: read or write via render; mongo: read or write via pymongo; file: read or write to file | mongo | String | str
client_scripts | see renderapi.render.RenderClient | /allen/aibs/pipeline/image_processing/volume_assembly/render-jars/production/scripts | String | str
memGB | see renderapi.render.RenderClient | 5G | String | str
validate_client | see renderapi.render.RenderClient | False | Boolean | bool
input_file | json or json.gz serialization of input | None | InputFile | str
collection_type | 'stack' or 'pointmatch' | pointmatch | String | str
opts = <marshmallow.schema.SchemaOpts object>
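A hedged example of a pointmatch block that reads correspondences directly via pymongo; the host, owner, and collection name are placeholders.

```python
# Hypothetical pointmatch block reading matches through the mongo interface.
pointmatch = {
    "db_interface": "mongo",
    "mongo_host": "mongo-host.example.org",
    "mongo_port": 27017,
    "owner": "example_owner",
    "name": ["example_pointmatch_collection"],
    "collection_type": "pointmatch",
}
```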
class bigfeta.schemas.hdf5_options(extra=None, only=None, exclude=(), prefix='', strict=None, many=False, context=None, load_only=(), dump_only=(), partial=False)

Bases: argschema.schemas.DefaultSchema

hdf5_options
key | description | default | field_type | json_type
output_dir | path to directory to hold hdf5 output | | String | str
chunks_per_file | how many sections with upward-looking cross section to write per .h5 file | 5 | Integer | int
opts = <marshmallow.schema.SchemaOpts object>
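A sketch of how these options pair with output_mode in the top-level input: when output_mode is 'hdf5', the assembled system is written under hdf5_options.output_dir for an external solver. The directory below is a placeholder.

```python
# Hypothetical fragment of a BigFetaSchema input that writes the assembled
# matrices to hdf5 instead of solving in-process.
hdf5_output_fragment = {
    "output_mode": "hdf5",
    "hdf5_options": {
        "output_dir": "/path/to/hdf5_scratch",  # placeholder directory
        "chunks_per_file": 5,                   # sections per .h5 file
    },
}
```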
class bigfeta.schemas.matrix_assembly(extra=None, only=None, exclude=(), prefix='', strict=None, many=False, context=None, load_only=(), dump_only=(), partial=False)

Bases: argschema.schemas.DefaultSchema

matrix_assembly
key | description | default | field_type | json_type
depth | depth in z for matrix assembly point matches | [0, 1, 2] | List | int
explicit_weight_by_depth | explicitly set solver weights by depth | None | List | float
cross_pt_weight | weight of cross section point matches | 1.0 | Float | float
montage_pt_weight | weight of montage point matches | 1.0 | Float | float
npts_min | disregard any tile pairs with fewer points than this | 5 | Integer | int
npts_max | truncate any tile pairs to this size | 500 | Integer | int
choose_random | choose random points to meet npts_max, rather than just the first npts_max | False | Boolean | bool
inverse_dz | cross section point match weighting fades with z | True | Boolean | bool
check_explicit(data)
opts = <marshmallow.schema.SchemaOpts object>
tolist(data)
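A hedged example of a matrix_assembly block for a 3D solve that includes matches up to two sections away and down-weights cross-section matches relative to montage matches; the weights are illustrative, not recommendations.

```python
# Hypothetical matrix_assembly block; values are illustrative only.
matrix_assembly = {
    "depth": [0, 1, 2],          # z offsets of point matches to include
    "montage_pt_weight": 1.0,
    "cross_pt_weight": 0.5,
    "inverse_dz": True,          # cross-section weight fades with z separation
    "npts_min": 5,               # skip tile pairs with fewer points
    "npts_max": 500,             # truncate tile pairs to this many points
    "choose_random": False,      # take the first npts_max points
}
```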
class bigfeta.schemas.regularization(extra=None, only=None, exclude=(), prefix='', strict=None, many=False, context=None, load_only=(), dump_only=(), partial=False)

Bases: argschema.schemas.DefaultSchema

regularization
key | description | default | field_type | json_type
default_lambda | common regularization value | 0.005 | Float | float
translation_factor | translation regularization factor; multiplies default_lambda | 0.005 | Float | float
poly_factors | list of regularization factors by order (0, 1, ..., n); overrides other settings for Polynomial2DTransform; multiplies default_lambda | None | List | float
thinplate_factor | regularization factor for thin plate spline control points; multiplies default_lambda | 1e-05 | Float | float
opts = <marshmallow.schema.SchemaOpts object>
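A hedged example of a regularization block for a solve with transformation set to Polynomial2DTransform and poly_order 2: poly_factors supplies one factor per order (0, 1, 2), each multiplying default_lambda. The numbers are illustrative, not recommended values.

```python
# Hypothetical regularization block for a second-order polynomial solve;
# values are illustrative only.
regularization = {
    "default_lambda": 1000.0,           # common regularization value
    "poly_factors": [1e-5, 1.0, 1e6],   # one factor per order (0, 1, 2); multiplies default_lambda
}
```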
class bigfeta.schemas.BigFetaPlotSchema(extra=None, only=None, exclude=(), prefix='', strict=None, many=False, context=None, load_only=(), dump_only=(), partial=False)

Bases: bigfeta.schemas.BigFetaSchema

This schema is designed to be a schema_type for an ArgSchemaParser object
BigFetaPlotSchema
In addition to every field inherited from BigFetaSchema (listed in the table above), BigFetaPlotSchema adds the following:
key | description | default | field_type | json_type
z1 | first z for plot | 1000 | Integer | int
z2 | second z for plot | 1000 | Integer | int
zoff | z offset between pointmatches and tilespecs | 0 | Integer | int
plot | make a plot; otherwise, just text output | True | Boolean | bool
savefig | save to a pdf | False | Boolean | bool
plot_dir | no description | ./ | String | str
threshold | threshold for colors in residual plot [pixels] | 5.0 | Float | float
density | whether residual plot is density (for large numbers of points) or just points | True | Boolean | bool
opts = <marshmallow.schema.SchemaOpts object>
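For illustration, the plot-specific keys could be combined with a full BigFetaSchema input dict like the sketch below; which script consumes this schema is not shown here, so treat this purely as a shape example with placeholder values.

```python
# Hypothetical plot-specific additions to a BigFetaSchema-style input dict.
plot_fragment = {
    "z1": 1000,         # first z for plot
    "z2": 1001,         # second z for plot
    "zoff": 0,          # z offset between pointmatches and tilespecs
    "plot": True,       # make a plot rather than text-only output
    "savefig": True,    # save to a pdf
    "plot_dir": "./",
    "threshold": 5.0,   # residual color threshold [pixels]
    "density": True,    # density plot for large numbers of points
}
```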