pax_global_header 0000666 0000000 0000000 00000000064 14563016623 0014520 g ustar 00root root 0000000 0000000 52 comment=0d5426b5851be80dd8e51470a0784a73565a3006 mrcal-2.4.1/ 0000775 0000000 0000000 00000000000 14563016623 0012622 5 ustar 00root root 0000000 0000000 mrcal-2.4.1/.gitattributes 0000664 0000000 0000000 00000000000 14563016623 0015503 0 ustar 00root root 0000000 0000000 mrcal-2.4.1/.gitignore 0000664 0000000 0000000 00000000537 14563016623 0014617 0 ustar 00root root 0000000 0000000 *~ *.d *.o *.so* *.docstring.h cscope* *.pyc *.orig *.i *.rej *generated* *GENERATED* core* debian/*.log debian/*.substvars debian/*-stamp debian/*.debhelper debian/files debian/*/ *.pickle test-gradients test/test-lensmodel-string-manipulation test/test-cahvor test/test-parser-cameramodel *.pod *GENERATED.* *.backup* *.fig.bak ltximg/ doc/out mrcal-2.4.1/CHOLMOD_factorization.docstring 0000664 0000000 0000000 00000005404 14563016623 0020564 0 ustar 00root root 0000000 0000000 A basic Python interface to CHOLMOD SYNOPSIS from scipy.sparse import csr_matrix indptr = np.array([0, 2, 3, 6, 8]) indices = np.array([0, 2, 2, 0, 1, 2, 1, 2]) data = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float) Jsparse = csr_matrix((data, indices, indptr)) Jdense = Jsparse.toarray() print(Jdense) ===> [[1. 0. 2.] [0. 0. 3.] [4. 5. 6.] [0. 7. 8.]] bt = np.array(((1., 5., 3.), (2., -2., -8))) print(nps.transpose(bt)) ===> [[ 1. 2.] [ 5. -2.] [ 3. -8.]] F = mrcal.CHOLMOD_factorization(Jsparse) xt = F.solve_xt_JtJ_bt(bt) print(nps.transpose(xt)) ===> [[ 0.02199662 0.33953751] [ 0.31725888 0.46982516] [-0.21996616 -0.50648618]] print(nps.matmult(nps.transpose(Jdense), Jdense, nps.transpose(xt))) ===> [[ 1. 2.] [ 5. -2.] [ 3. -8.]] The core of the mrcal optimizer is a sparse linear least squares solver using CHOLMOD to solve a large, sparse linear system. CHOLMOD is a C library, but it is sometimes useful to invoke it from Python. The CHOLMOD_factorization class factors a matrix JtJ, and this method uses that factorization to efficiently solve the linear equation JtJ x = b. The usual linear algebra conventions refer to column vectors, but numpy generally deals with row vectors, so I talk about solving the equivalent transposed problem: xt JtJ = bt. The difference is purely notational. The class takes a sparse array J as an argument in __init__(). J is optional, but there's no way in Python to pass it later, so from Python you should always pass J. This is optional for internal initialization from C code. J must be given as an instance of scipy.sparse.csr_matrix. csr is a row-major sparse representation. CHOLMOD wants column-major matrices, so it see this matrix J as a transpose: the CHOLMOD documentation refers to this as "At". And the CHOLMOD documentation talks about factoring AAt, while I talk about factoring JtJ. These are the same thing. The factorization of JtJ happens in __init__(), and we use this factorization later (as many times as we want) to solve JtJ x = b by calling solve_xt_JtJ_bt(). This class carefully checks its input for validity, but makes no effort to be flexible: anything that doesn't look right will result in an exception. Specifically: - J.data, J.indices, J.indptr must all be numpy arrays - J.data, J.indices, J.indptr must all have exactly one dimension - J.data, J.indices, J.indptr must all be C-contiguous (the normal numpy order) - J.data must hold 64-bit floating-point values (dtype=float) - J.indices, J.indptr must hold 32-bit integers (dtype=np.int32) ARGUMENTS The __init__() function takes - J: a sparse array in a scipy.sparse.csr_matrix object mrcal-2.4.1/CHOLMOD_factorization_rcond.docstring 0000664 0000000 0000000 00000001506 14563016623 0021750 0 ustar 00root root 0000000 0000000 Compute rough estimate of reciprocal of condition number SYNOPSIS b, x, J, factorization = \ mrcal.optimizer_callback(**optimization_inputs) rcond = factorization.rcond() Calls cholmod_rcond(). Its documentation says: Returns a rough estimate of the reciprocal of the condition number: the minimum entry on the diagonal of L (or absolute entry of D for an LDLT factorization) divided by the maximum entry. L can be real, complex, or zomplex. Returns -1 on error, 0 if the matrix is singular or has a zero or NaN entry on the diagonal of L, 1 if the matrix is 0-by-0, or min(diag(L))/max(diag(L)) otherwise. Never returns NaN; if L has a NaN on the diagonal it returns zero instead. ARGUMENTS - None RETURNED VALUE A single floating point value: an estimate of the reciprocal of the condition number mrcal-2.4.1/CHOLMOD_factorization_solve_xt_JtJ_bt.docstring 0000664 0000000 0000000 00000004365 14563016623 0023750 0 ustar 00root root 0000000 0000000 Solves the linear system JtJ x = b using CHOLMOD SYNOPSIS from scipy.sparse import csr_matrix indptr = np.array([0, 2, 3, 6, 8]) indices = np.array([0, 2, 2, 0, 1, 2, 1, 2]) data = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float) Jsparse = csr_matrix((data, indices, indptr)) Jdense = Jsparse.toarray() print(Jdense) ===> [[1. 0. 2.] [0. 0. 3.] [4. 5. 6.] [0. 7. 8.]] bt = np.array(((1., 5., 3.), (2., -2., -8))) print(nps.transpose(bt)) ===> [[ 1. 2.] [ 5. -2.] [ 3. -8.]] F = mrcal.CHOLMOD_factorization(Jsparse) xt = F.solve_xt_JtJ_bt(bt) print(nps.transpose(xt)) ===> [[ 0.02199662 0.33953751] [ 0.31725888 0.46982516] [-0.21996616 -0.50648618]] print(nps.matmult(nps.transpose(Jdense), Jdense, nps.transpose(xt))) ===> [[ 1. 2.] [ 5. -2.] [ 3. -8.]] The core of the mrcal optimizer is a sparse linear least squares solver using CHOLMOD to solve a large, sparse linear system. CHOLMOD is a C library, but it is sometimes useful to invoke it from Python. The CHOLMOD_factorization class factors a matrix JtJ, and this method uses that factorization to efficiently solve the linear equation JtJ x = b. The usual linear algebra conventions refer to column vectors, but numpy generally deals with row vectors, so I talk about solving the equivalent transposed problem: xt JtJ = bt. The difference is purely notational. As many vectors b as we'd like may be given at one time (in rows of bt). The dimensions of the returned array xt will match the dimensions of the given array bt. Broadcasting is supported: any leading dimensions will be processed correctly, as long as bt has shape (..., Nstate) This function carefully checks its input for validity, but makes no effort to be flexible: anything that doesn't look right will result in an exception. Specifically: - bt must be C-contiguous (the normal numpy order) - bt must contain 64-bit floating-point values (dtype=float) ARGUMENTS - bt: a numpy array of shape (..., Nstate). This array must be C-contiguous and it must have dtype=float RETURNED VALUE The transpose of the solution array x, in a numpy array of the same shape as the input bt mrcal-2.4.1/LICENSE 0000664 0000000 0000000 00000026274 14563016623 0013642 0 ustar 00root root 0000000 0000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright (c) 2017-2023 California Institute of Technology ("Caltech"). U.S. Government sponsorship acknowledged. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. mrcal-2.4.1/Makefile 0000664 0000000 0000000 00000011301 14563016623 0014256 0 ustar 00root root 0000000 0000000 include choose_mrbuild.mk include $(MRBUILD_MK)/Makefile.common.header # "0" or undefined means "false" # everything else means "true" # libelas stereo matcher. Available in Debian/non-free. I don't want to depend # on anything in non-free, so I default to not using libelas USE_LIBELAS ?= 0 # convert all USE_XXX:=0 to an empty string $(foreach v,$(filter USE_%,$(.VARIABLES)),$(if $(filter 0,${$v}),$(eval undefine $v))) # to print them all: $(foreach v,$(filter USE_%,$(.VARIABLES)),$(warning $v = '${$v}')) PROJECT_NAME := mrcal ABI_VERSION := 4 TAIL_VERSION := 0 VERSION := $(VERSION_FROM_PROJECT) LIB_SOURCES += \ mrcal.c \ mrcal-opencv.c \ mrcal-image.c \ stereo.c \ poseutils.c \ poseutils-opencv.c \ poseutils-uses-autodiff.cc \ triangulation.cc \ cahvore.cc ifneq (${USE_LIBELAS},) # using libelas LIB_SOURCES := $(LIB_SOURCES) stereo-matching-libelas.cc endif BIN_SOURCES += \ test-gradients.c \ test/test-cahvor.c \ test/test-lensmodel-string-manipulation.c \ test/test-parser-cameramodel.c LDLIBS += -ldogleg -lfreeimage ifneq (${USE_LIBELAS},) # using libelas LDLIBS += -lelas endif CFLAGS += --std=gnu99 CCXXFLAGS += -Wno-missing-field-initializers -Wno-unused-variable -Wno-unused-parameter -Wno-missing-braces $(patsubst %.c,%.o,$(shell grep -l '#include .*minimath\.h' *.c */*.c)): minimath/minimath_generated.h minimath/minimath_generated.h: minimath/minimath_generate.pl ./$< > $@.tmp && mv $@.tmp $@ EXTRA_CLEAN += minimath/minimath_generated.h DIST_INCLUDE += \ mrcal.h \ mrcal-image.h \ mrcal-internal.h \ basic-geometry.h \ poseutils.h \ triangulation.h \ mrcal-types.h \ stereo.h DIST_BIN := \ mrcal-calibrate-cameras \ mrcal-convert-lensmodel \ mrcal-show-distortion-off-pinhole \ mrcal-show-splined-model-correction \ mrcal-show-projection-uncertainty \ mrcal-show-projection-diff \ mrcal-reproject-points \ mrcal-reproject-image \ mrcal-graft-models \ mrcal-to-cahvor \ mrcal-from-cahvor \ mrcal-to-kalibr \ mrcal-from-kalibr \ mrcal-from-ros \ mrcal-show-geometry \ mrcal-show-valid-intrinsics-region \ mrcal-is-within-valid-intrinsics-region \ mrcal-triangulate \ mrcal-cull-corners \ mrcal-show-residuals-board-observation \ mrcal-show-residuals \ mrcal-stereo # generate manpages from distributed binaries, and ship them. This is a hoaky # hack because apparenly manpages from python tools is a crazy thing to want to # do DIST_MAN := $(addsuffix .1,$(DIST_BIN)) # if using an older mrbuild SO won't be defined, and we need it SO ?= so # parser cameramodel-parser_GENERATED.c: cameramodel-parser.re mrcal.h re2c $< > $@.tmp && mv $@.tmp $@ LIB_SOURCES += cameramodel-parser_GENERATED.c EXTRA_CLEAN += cameramodel-parser_GENERATED.c cameramodel-parser_GENERATED.o: CCXXFLAGS += -fno-fast-math ALL_NPSP_EXTENSION_MODULES := $(patsubst %-genpywrap.py,%,$(wildcard *-genpywrap.py)) ifeq (${USE_LIBELAS},) # not using libelas ALL_NPSP_EXTENSION_MODULES := $(filter-out elas,$(ALL_NPSP_EXTENSION_MODULES)) endif ALL_PY_EXTENSION_MODULES := _mrcal $(patsubst %,_%_npsp,$(ALL_NPSP_EXTENSION_MODULES)) %/: mkdir -p $@ ######### python stuff %-npsp-pywrap-GENERATED.c: %-genpywrap.py python3 $< > $@.tmp && mv $@.tmp $@ mrcal/_%_npsp$(PY_EXT_SUFFIX): %-npsp-pywrap-GENERATED.o libmrcal.$(SO) libmrcal.$(SO).${ABI_VERSION} $(PY_MRBUILD_LINKER) $(PY_MRBUILD_LDFLAGS) $(LDFLAGS) $< -lmrcal -o $@ ALL_NPSP_C := $(patsubst %,%-npsp-pywrap-GENERATED.c,$(ALL_NPSP_EXTENSION_MODULES)) ALL_NPSP_O := $(patsubst %,%-npsp-pywrap-GENERATED.o,$(ALL_NPSP_EXTENSION_MODULES)) ALL_NPSP_SO := $(patsubst %,mrcal/_%_npsp$(PY_EXT_SUFFIX),$(ALL_NPSP_EXTENSION_MODULES)) EXTRA_CLEAN += $(ALL_NPSP_C) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95635 $(ALL_NPSP_O): CFLAGS += -Wno-array-bounds mrcal-pywrap.o: $(addsuffix .h,$(wildcard *.docstring)) mrcal/_mrcal$(PY_EXT_SUFFIX): mrcal-pywrap.o libmrcal.$(SO) libmrcal.$(SO).${ABI_VERSION} $(PY_MRBUILD_LINKER) $(PY_MRBUILD_LDFLAGS) $(LDFLAGS) $< -lmrcal -lsuitesparseconfig -o $@ CFLAGS += -I/usr/include/suitesparse PYTHON_OBJECTS := mrcal-pywrap.o $(ALL_NPSP_O) # In the python api I have to cast a PyCFunctionWithKeywords to a PyCFunction, # and the compiler complains. But that's how Python does it! So I tell the # compiler to chill $(PYTHON_OBJECTS): CFLAGS += -Wno-cast-function-type $(PYTHON_OBJECTS): CFLAGS += $(PY_MRBUILD_CFLAGS) # The python libraries (compiled ones and ones written in python) all live in # mrcal/ DIST_PY3_MODULES := mrcal all: mrcal/_mrcal$(PY_EXT_SUFFIX) $(ALL_NPSP_SO) EXTRA_CLEAN += mrcal/*.$(SO) include Makefile.doc include Makefile.tests include $(MRBUILD_MK)/Makefile.common.footer mrcal-2.4.1/Makefile.doc 0000664 0000000 0000000 00000011432 14563016623 0015027 0 ustar 00root root 0000000 0000000 # -*- Makefile -*- # To use the latest tag in the documentation, run # # VERSION_USE_LATEST_TAG=1 make ## I generate a single mrcal-python-api-reference.html for ALL the Python code. ## This is large DOC_HTML += doc/out/mrcal-python-api-reference.html doc/out/mrcal-python-api-reference.html: $(wildcard mrcal/*.py) $(patsubst %,mrcal/%$(PY_EXT_SUFFIX),$(ALL_PY_EXTENSION_MODULES)) libmrcal.so.$(ABI_VERSION) | doc/out/ python3 doc/pydoc.py -w mrcal > $@.tmp && mv $@.tmp $@ DOC_ALL_FIG := $(wildcard doc/*.fig) DOC_ALL_SVG_FROM_FIG := $(patsubst doc/%.fig,doc/out/figures/%.svg,$(DOC_ALL_FIG)) DOC_ALL_PDF_FROM_FIG := $(patsubst doc/%.fig,doc/out/figures/%.pdf,$(DOC_ALL_FIG)) doc: $(DOC_ALL_SVG_FROM_FIG) $(DOC_ALL_PDF_FROM_FIG) $(DOC_ALL_SVG_FROM_FIG): doc/out/figures/%.svg: doc/%.fig | doc/out/figures/ fig2dev -L svg $< $@ $(DOC_ALL_PDF_FROM_FIG): doc/out/figures/%.pdf: doc/%.fig | doc/out/figures/ fig2dev -L pdf $< $@ ## I'd like to place docs for each Python submodule in a separate .html (instead ## of a single huge mrcal-python-api-reference.html). Here's an attempt at at ## that. This works, but needs more effort: ## ## - top level mrcal.html is confused about what it contains. It has all of ## _mrcal and _poseutils for some reason ## - cross-submodule links don't work # # doc-reference: \ # $(patsubst mrcal/%.py,doc/mrcal.%.html,$(filter-out %/__init__.py,$(wildcard mrcal/*.py))) \ # $(patsubst %,doc/out/mrcal.%.html,$(ALL_PY_EXTENSION_MODULES)) \ # doc/out/mrcal.html # doc/out/mrcal.%.html: \ # mrcal/%.py \ # $(patsubst %,mrcal/%$(PY_EXT_SUFFIX),$(ALL_PY_EXTENSION_MODULES)) \ # libmrcal.so.$(ABI_VERSION) # doc/pydoc.py -w mrcal.$* > $@.tmp && mv $@.tmp $@ # doc/out/mrcal.%.html: mrcal/%$(PY_EXT_SUFFIX) # doc/pydoc.py -w mrcal.$* > $@.tmp && mv $@.tmp $@ # doc/out/mrcal.html: \ # $(wildcard mrcal/*.py) \ # $(patsubst %,mrcal/%$(PY_EXT_SUFFIX),$(ALL_PY_EXTENSION_MODULES)) \ # libmrcal.so.$(ABI_VERSION) # doc/pydoc.py -w mrcal > $@.tmp && mv $@.tmp $@ # .PHONY: doc-reference DOC_ALL_CSS := $(wildcard doc/*.css) DOC_ALL_CSS_TARGET := $(patsubst doc/%,doc/out/%,$(DOC_ALL_CSS)) doc: $(DOC_ALL_CSS_TARGET) $(DOC_ALL_CSS_TARGET): doc/out/%.css: doc/%.css | doc/out/ cp $< doc/out DOC_ALL_ORG := $(wildcard doc/*.org) DOC_ALL_HTML_TARGET := $(patsubst doc/%.org,doc/out/%.html,$(DOC_ALL_ORG)) DOC_HTML += $(DOC_ALL_HTML_TARGET) # This ONE command creates ALL the html files, so I want a pattern rule to indicate # that. I want to do: # %/out/a.html %/out/b.html %/out/c.html: %/a.org %/b.org %/c.org $(addprefix %,$(patsubst doc/%,/%,$(DOC_ALL_HTML_TARGET))): $(addprefix %,$(patsubst doc/%,/%,$(DOC_ALL_ORG))) emacs --chdir=doc -l mrcal-docs-publish.el --batch --eval '(load-library "org")' --eval '(org-publish-all t nil)' $(DOC_ALL_HTML_TARGET): doc/mrcal-docs-publish.el | doc/out/ $(DIST_MAN): %.1: %.pod pod2man --center="mrcal: camera projection, calibration toolkit" --name=MRCAL --release="mrcal $(VERSION)" --section=1 $< $@ %.pod: % $(MRBUILD_BIN)/make-pod-from-help $< > $@.tmp && cat footer.pod >> $@.tmp && mv $@.tmp $@ EXTRA_CLEAN += $(DIST_MAN) $(patsubst %.1,%.pod,$(DIST_MAN)) # I generate a manpage. Some perl stuff to add the html preamble MANPAGES_HTML := $(patsubst %,doc/out/%.html,$(DIST_BIN)) doc/out/%.html: %.pod | doc/out/ pod2html --noindex --css=mrcal.css --infile=$< | \ perl -ne 'BEGIN {$$h = `cat doc/mrcal-preamble-GENERATED.html`;} if(!/(.*
)(.*)/s) { print; } else { print "$$1 $$h $$2"; }' > $@.tmp && mv $@.tmp $@ DOC_HTML += $(MANPAGES_HTML) $(DOC_HTML): doc/mrcal-preamble-GENERATED.html # If the git HEAD moves, I regenerate the preamble. It contains a version string # that uses the git info. This still isn't complete. A new git tag SHOULD # trigger this to be regenerated, but it doesn't. I'll do that later doc/mrcal-preamble-GENERATED.html: doc/mrcal-preamble-TEMPLATE.html $(and $(wildcard .git),.git/$(shell cut -d' ' -f2 .git/HEAD)) < $< sed s/@@VERSION@@/$(VERSION)/g > $@.tmp && mv $@.tmp $@ EXTRA_CLEAN += doc/mrcal-preamble-GENERATED.html # Documentation uses the latest tag only to denote the version. This will only # work properly after a 'make clean' (If you "make", everything is built without # VERSION_USE_LATEST_TAG=1; then when you "make doc", the products built without # VERSION_USE_LATEST_TAG=1 are used). Beware doc: VERSION_USE_LATEST_TAG=1 doc: $(DOC_HTML) .PHONY: doc # the whole output documentation directory EXTRA_CLEAN += doc/out # Convenience rules. Don't blindly use these if you aren't Dima and you aren't # using his machine publish-doc: doc rsync --exclude '*~' --exclude external -avu doc/out/ mrcal.secretsauce.net:/var/www/mrcal/docs-latest-release publish-doc-external: rsync -avu ../mrcal-doc-external/ mrcal.secretsauce.net:/var/www/mrcal/docs-2.3/external .PHONY: publish-doc publish-doc-external mrcal-2.4.1/Makefile.tests 0000664 0000000 0000000 00000011767 14563016623 0015437 0 ustar 00root root 0000000 0000000 # -*- Makefile -*- # The test suite no longer runs in parallel, but it ALWAYS tries to run all the # tests, even without 'make -k' TESTS := \ test/test-pywrap-functions.py \ test/test-pylib-projections.py \ test/test-poseutils.py \ test/test-cameramodel.py \ test/test-poseutils-lib.py \ test/test-projections.py \ test/test-projection--special-cases.py__pinhole \ test/test-projection--special-cases.py__stereographic \ test/test-projection--special-cases.py__lonlat \ test/test-projection--special-cases.py__latlon \ test/test-gradients.py \ test/test-py-gradients.py \ test/test-cahvor \ test/test-optimizer-callback.py \ test/test-basic-sfm.py \ test/test-basic-calibration.py \ test/test-surveyed-calibration.py \ test/test-surveyed-calibration.py__--distance__8 \ test/test-surveyed-calibration.py__--oversample__10 \ test/test-surveyed-calibration.py__--do-sample \ test/test-surveyed-calibration.py__--do-sample__--distance__8 \ test/test-surveyed-calibration.py__--do-sample__--oversample__10 \ test/test-projection-uncertainty.py__--fixed__cam0__--model__opencv4__--do-sample \ test/test-projection-uncertainty.py__--fixed__frames__--model__opencv4__--do-sample__--Nsamples__200 \ test/test-projection-uncertainty.py__--fixed__cam0__--model__splined \ test/test-linearizations.py \ test/test-lensmodel-string-manipulation \ test/test-CHOLMOD-factorization.py \ test/test-projection-diff.py \ test/test-graft-models.py \ test/test-convert-lensmodel.py \ test/test-stereo.py \ test/test-solvepnp.py \ test/test-match-feature.py \ test/test-triangulation.py \ test/test-uncertainty-broadcasting.py__--q-calibration-stdev__0.3__--q-observation-stdev__0.3__--q-observation-stdev-correlation__0.5 \ test/test-parser-cameramodel \ test/test-extrinsics-moved-since-calibration.py \ test/test-broadcasting.py \ test/test-compute-chessboard-corners-parsing.py \ test/test-rectification-maps.py \ test/test-save-load-image.py \ test/test-worst_direction_stdev.py # triangulation-uncertainty tests. Lots and lots of tests to exhaustively try # out different scenarios BASE := test/test-triangulation-uncertainty.py__--do-sample__--model__opencv4__--observed-point__-40__0__200__--Ncameras__3__--cameras__2__1__--q-observation-stdev-correlation__0.6 TESTS_TRIANGULATION_UNCERTAINTY := \ $(foreach stabilization,__--stabilize-coords NONE, \ $(foreach fixed,cam0 frames, \ $(foreach only,observation calibration, \ $(BASE)$(if $(filter NONE,$(stabilization)),,$(stabilization))__--fixed__$(fixed)__--q-$(only)-stdev__0.5))) # Both sets of noise. Looking at different camera BASE := test/test-triangulation-uncertainty.py__--do-sample__--model__opencv4__--Ncameras__2__--cameras__1__0__--q-observation-stdev__0.5__--q-calibration-stdev__0.5 # Different amounts of correlation and near/far TESTS_TRIANGULATION_UNCERTAINTY := $(TESTS_TRIANGULATION_UNCERTAINTY) \ $(foreach stabilization,__--stabilize-coords NONE, \ $(foreach fixed,cam0 frames, \ $(foreach corr,0.1 0.9, \ $(foreach where,near far, \ $(BASE)$(if $(filter NONE,$(stabilization)),,$(stabilization))__--fixed__$(fixed)__--q-observation-stdev-correlation__$(corr)__--observed-point__$(if $(filter near,$(where)),-2__0__10,-40__0__200))))) TESTS_external_data := test/test-stereo-range.py # TESTS_TRIANGULATION_UNCERTAINTY not included in TESTS yet, so TESTS_NOSAMPLING # includes none of those TESTS_NOSAMPLING := $(foreach t,$(TESTS),$(if $(filter --do-sample,$(subst __, ,$t)),,$t)) TESTS := $(TESTS) $(TESTS_TRIANGULATION_UNCERTAINTY) # TESTS_external_data is not included in any other set, and must be requested # explicitly # Define the set for each "make test-xxx" command TESTS_all := $(TESTS) TESTS_nosampling := $(TESTS_NOSAMPLING) TESTS_triangulation_uncertainty := $(TESTS_TRIANGULATION_UNCERTAINTY) # "test-all" is the full set of tests # "test-nosampling" excludes the very time-consuming tests # "test-external-data" is everything that needs data other than what's in the repo TESTS_ALL_TARGETS := test-all test-nosampling test-triangulation-uncertainty test-external-data $(TESTS_ALL_TARGETS): all @FAILED=""; $(foreach t,$(TESTS_$(subst -,_,$(patsubst test-%,%,$@))),echo "========== RUNNING: $(subst __, ,$t)"; $(subst __, ,$t) || FAILED="$$FAILED $t"; ) test -z "$$FAILED" || echo "SOME TEST SETS FAILED: $$FAILED !"; test -z "$$FAILED" && echo "ALL TEST SETS PASSED!" .PHONY: $(TESTS_ALL_TARGETS) test: @echo "Which test set should we run? I know about '$(TESTS_ALL_TARGETS)'" >> /dev/stderr; false .PHONY: test mrcal-2.4.1/README.org 0000664 0000000 0000000 00000002260 14563016623 0014270 0 ustar 00root root 0000000 0000000 * SYNOPSIS #+BEGIN_EXAMPLE $ mrcal-calibrate-cameras --focal 2000 --outdir /tmp --object-spacing 0.01 --object-width-n 10 '/tmp/left*.png' '/tmp/right*.png' ... lots of output as the solve runs ... Wrote /tmp/camera0-0.cameramodel Wrote /tmp/camera0-1.cameramodel #+END_EXAMPLE And now we have a calibration! * SUMMARY =mrcal= is a generic toolkit built to solve the calibration and SFM-like problems we encounter at NASA/JPL. Functionality related to these problems is exposed as a set of C and Python libraries and some commandline tools. * DESCRIPTION Extensive documentation is available at http://mrcal.secretsauce.net/ * INSTALLATION Install and/or building instructions are on the [[http://mrcal.secretsauce.net/install.html]["Building or installing" page]]. * REPOSITORY https://www.github.com/dkogan/mrcal/ * AUTHOR Dima Kogan (=dima@secretsauce.net=) * LICENSE AND COPYRIGHT Copyright (c) 2017-2023 California Institute of Technology ("Caltech"). U.S. Government sponsorship acknowledged. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 mrcal-2.4.1/_rectification_maps.docstring 0000664 0000000 0000000 00000000266 14563016623 0020546 0 ustar 00root root 0000000 0000000 Construct image transformation maps to make rectified images This is an internal function. You probably want mrcal.rectification_maps(). See the docs for that function for details. mrcal-2.4.1/_rectified_resolution.docstring 0000664 0000000 0000000 00000000266 14563016623 0021124 0 ustar 00root root 0000000 0000000 Compute the resolution to be used for the rectified system This is an internal function. You probably want mrcal.rectified_resolution(). See the docs for that function for details. mrcal-2.4.1/_rectified_system.docstring 0000664 0000000 0000000 00000000247 14563016623 0020244 0 ustar 00root root 0000000 0000000 Build rectified models for stereo rectification This is an internal function. You probably want mrcal.rectified_system(). See the docs for that function for details. mrcal-2.4.1/analyses/ 0000775 0000000 0000000 00000000000 14563016623 0014441 5 ustar 00root root 0000000 0000000 mrcal-2.4.1/analyses/dancing/ 0000775 0000000 0000000 00000000000 14563016623 0016044 5 ustar 00root root 0000000 0000000 mrcal-2.4.1/analyses/dancing/dance-study.py 0000775 0000000 0000000 00000176600 14563016623 0020653 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3 r'''Simulates different chessboard dances to find the best technique We want the shortest chessboard dances that produce the most confident results. In a perfect world would put the chessboard in all the locations where we would expect to use the visual system, since the best confidence is obtained in regions where the chessboard was observed. However, due to geometric constraints it is sometimes impossible to put the board in the right locations. This tool clearly shows that filling the field of view produces best results. But very wide lenses require huge chessboards, displayed very close to the lens in order to fill the field of view. This means that using a wide lens to look out to infinity will always result in potentially too little projection confidence. This tool is intended to find the kind of chessboard dance to get good confidences by simulating different geometries and dances. We arrange --Ncameras cameras horizontally, with an identity rotation, evenly spaced with a spacing of --camera-spacing meters. The left camera is at the origin. We show the cameras lots of dense chessboards ("dense" meaning that every camera sees all the points of all the chessboards). The chessboard come in two clusters: "near" and "far". Each cluster is centered straight ahead of the midpoint of all the cameras, with some random noise on the position and orientation. The distances from the center of the cameras to the center of the clusters are given by --range. This tool solves the calibration problem, and generates uncertainty-vs-range curves. Each run of this tool generates a family of this curve, for different values of Nframes-far, the numbers of chessboard observations in the "far" cluster. This tool scans some parameter (selected by --scan), and reports the uncertainty-vs-range for the different values of this parameter (as a plot and as an output vnl). If we don't want to scan any parameter, and just want a single uncertainty-vs-range plot, don't pass --scan. ''' import sys import argparse import re import os def parse_args(): def positive_float(string): try: value = float(string) except: raise argparse.ArgumentTypeError("argument MUST be a positive floating-point number. Got '{}'".format(string)) if value <= 0: raise argparse.ArgumentTypeError("argument MUST be a positive floating-point number. Got '{}'".format(string)) return value def positive_int(string): try: value = int(string) except: raise argparse.ArgumentTypeError("argument MUST be a positive integer. Got '{}'".format(string)) if value <= 0 or abs(value-float(string)) > 1e-6: raise argparse.ArgumentTypeError("argument MUST be a positive integer. Got '{}'".format(string)) return value parser = \ argparse.ArgumentParser(description = __doc__, formatter_class=argparse.RawDescriptionHelpFormatter) parser.add_argument('--icam-uncertainty', default = 0, type=int, help='''Which camera to use for the uncertainty reporting. I use the left-most one (camera 0) by default''') parser.add_argument('--camera-spacing', default = 0.3, type=float, help='How many meters between adjacent cameras in our synthetic world') parser.add_argument('--object-spacing', default="0.077", type=str, help='''Width of each square in the calibration board, in meters. If --scan object_width_n, this is used as the spacing for the SMALLEST width. As I change object_width_n, I adjust object_spacing accordingly, to keep the total board size constant. If --scan object_spacing, this is MIN,MAX to specify the bounds of the scan. In that case object_width_n stays constant, so the board changes size''') parser.add_argument('--object-width-n', type=str, help='''How many points the calibration board has per horizontal side. If both are omitted, we default the width and height to 10 (both must be specified to use a non-default value). If --scan object_width_n, this is MIN,MAX to specify the bounds of the scan. In that case I assume a square object, and ignore --object-height-n. Scanning object-width-n keeps the board size constant''') parser.add_argument('--object-height-n', type=int, help='''How many points the calibration board has per vertical side. If both are omitted, we default the width and height to 10 (both must be specified to use a non-default value). If --scan object_width_n, this is ignored, and set equivalent to the object width''') parser.add_argument('--observed-pixel-uncertainty', type=positive_float, default = 1.0, help='''The standard deviation of x and y pixel coordinates of the input observations I generate. The distribution of the inputs is gaussian, with the standard deviation specified by this argument. Note: this is the x and y standard deviation, treated independently. If each of these is s, then the LENGTH of the deviation of each pixel is a Rayleigh distribution with expected value s*sqrt(pi/2) ~ s*1.25''') parser.add_argument('--lensmodel', required=False, type=str, help='''Which lens model to use for the simulation. If omitted, we use the model given on the commandline. We may want to use a parametric model to generate data (model on the commandline), but a richer splined model to solve''') parser.add_argument('--skip-calobject-warp-solve', action='store_true', default=False, help='''By default we assume the calibration target is slightly deformed, and we compute this deformation. If we want to assume that the chessboard shape is fixed, pass this option. The actual shape of the board is given by --calobject-warp''') parser.add_argument('--calobject-warp', type=float, nargs=2, default=(0.002, -0.005), help='''The "calibration-object warp". These specify the flex of the chessboard. By default, the board is slightly warped (as is usually the case in real life). To use a perfectly flat board, specify "0 0" here''') parser.add_argument('--show-geometry-first-solve', action = 'store_true', help='''If given, display the camera, chessboard geometry after the first solve, and exit. Used for debugging''') parser.add_argument('--show-uncertainty-first-solve', action = 'store_true', help='''If given, display the uncertainty (at infinity) and observations after the first solve, and exit. Used for debugging''') parser.add_argument('--write-models-first-solve', action = 'store_true', help='''If given, write the solved models to disk after the first solve, and exit. Used for debugging. Useful to check fov_x_deg when solving for a splined model''') parser.add_argument('--which', choices=('all-cameras-must-see-full-board', 'some-cameras-must-see-full-board', 'all-cameras-must-see-half-board', 'some-cameras-must-see-half-board'), default='all-cameras-must-see-half-board', help='''What kind of random poses are accepted. Default is all-cameras-must-see-half-board''') parser.add_argument('--fixed-frames', action='store_true', help='''Optimize the geometry keeping the chessboard frames fixed. This reduces the freedom of the solution, and produces more confident calibrations. It's possible to use this in reality by surverying the chessboard poses''') parser.add_argument('--ymax', type=float, default = 10.0, help='''If given, use this as the upper extent of the uncertainty plot.''') parser.add_argument('--uncertainty-at-range-sampled-min', type=float, help='''If given, use this as the lower bound of the ranges we look at when evaluating projection confidence''') parser.add_argument('--uncertainty-at-range-sampled-max', type=float, help='''If given, use this as the upper bound of the ranges we look at when evaluating projection confidence''') parser.add_argument('--explore', action='store_true', help='''Drop into a REPL at the end''') parser.add_argument('--scan', type=str, default='', choices=('range', 'tilt_radius', 'Nframes', 'Ncameras', 'object_width_n', 'object_spacing', 'num_far_constant_Nframes_near', 'num_far_constant_Nframes_all'), help='''Study the effect of some parameter on uncertainty. The parameter is given as an argument ot this function. Valid choices: ('range','tilt_radius','Nframes','Ncameras', 'object_width_n', 'object_spacing', 'num_far_constant_Nframes_near', 'num_far_constant_Nframes_all'). Scanning object-width-n keeps the board size constant''') parser.add_argument('--scan-object-spacing-compensate-range-from', type=float, help=f'''Only applies if --scan object-spacing. By default we vary the object spacings without touching anything else: the chessboard grows in size as we increase the spacing. If given, we try to keep the apparent size constant: as the object spacing grows, we increase the range. The nominal range given in this argument''') parser.add_argument('--range', default = '0.5,4.0', type=str, help='''if --scan num_far_...: this is "NEAR,FAR"; specifying the near and far ranges to the chessboard to evaluate. if --scan range this is "MIN,MAX"; specifying the extents of the ranges to evaluate. Otherwise: this is RANGE, the one range of chessboards to evaluate''') parser.add_argument('--tilt-radius', default='30.', type=str, help='''The radius of the uniform distribution used to sample the pitch and yaw of the chessboard observations, in degrees. The default is 30, meaning that the chessboard pitch and yaw are sampled from [-30deg,30deg]. if --scan tilt_radius: this is TILT-MIN,TILT-MAX specifying the bounds of the two tilt-radius values to evaluate. Otherwise: this is the one value we use''') parser.add_argument('--roll-radius', type=float, default=20., help='''The synthetic chessboard orientation is sampled from a uniform distribution: [-RADIUS,RADIUS]. The pitch,yaw radius is specified by the --tilt-radius. The roll radius is selected here. "Roll" is the rotation around the axis normal to the chessboard plane. Default is 20deg''') parser.add_argument('--x-radius', type=float, default=None, help='''The synthetic chessboard position is sampled from a uniform distribution: [-RADIUS,RADIUS]. The x radius is selected here. "x" is direction in the chessboard plane, and is also the axis along which the cameras are distributed. A resonable default (possibly range-dependent) is chosen if omitted. MUST be None if --scan num_far_...''') parser.add_argument('--y-radius', type=float, default=None, help='''The synthetic chessboard position is sampled from a uniform distribution: [-RADIUS,RADIUS]. The y radius is selected here. "y" is direction in the chessboard plane, and is also normal to the axis along which the cameras are distributed. A resonable default (possibly range-dependent) is chosen if omitted. MUST be None if --scan num_far_...''') parser.add_argument('--z-radius', type=float, default=None, help='''The synthetic chessboard position is sampled from a uniform distribution: [-RADIUS,RADIUS]. The z radius is selected here. "z" is direction normal to the chessboard plane. A resonable default (possibly range-dependent) is chosen if omitted. MUST be None if --scan num_far_...''') parser.add_argument('--Ncameras', default = '1', type=str, help='''How many cameras oberve our synthetic world. By default we just have one camera. if --scan Ncameras: this is NCAMERAS-MIN,NCAMERAS-MAX specifying the bounds of the camera counts to evaluate. Otherwise: this is the one Ncameras value to use''') parser.add_argument('--Nframes', type=str, help='''How many observed frames we have. Ignored if --scan num_far_... if --scan Nframes: this is NFRAMES-MIN,NFRAMES-MAX specifying the bounds of the frame counts to evaluate. Otherwise: this is the one Nframes value to use''') parser.add_argument('--Nframes-near', type=positive_int, help='''Used if --scan num_far_constant_Nframes_near. The number of "near" frames is given by this argument, while we look at the effect of adding more "far" frames.''') parser.add_argument('--Nframes-all', type=positive_int, help='''Used if --scan num_far_constant_Nframes_all. The number of "near"+"far" frames is given by this argument, while we look at the effect of replacing "near" frames with "far" frames.''') parser.add_argument('--Nscan-samples', type=positive_int, default=8, help='''How many values of the parameter being scanned to evaluate. If we're scanning something (--scan ... given) then by default the scan evaluates 8 different values. Otherwise this is set to 1''') parser.add_argument('--hardcopy', help=f'''Filename to plot into. If omitted, we make an interactive plot. This is passed directly to gnuplotlib''') parser.add_argument('--terminal', help=f'''gnuplot terminal to use for the plots. This is passed directly to gnuplotlib. Omit this unless you know what you're doing''') parser.add_argument('--extratitle', help=f'''Extra title string to add to a plot''') parser.add_argument('--title', help=f'''Full title string to use in a plot. Overrides the default and --extratitle''') parser.add_argument('--set', type=str, action='append', help='''Extra 'set' directives to gnuplotlib. Can be given multiple times''') parser.add_argument('--unset', type=str, action='append', help='''Extra 'unset' directives to gnuplotlib. Can be given multiple times''') parser.add_argument('model', type = str, help='''Baseline camera model. I use the intrinsics from this model to generate synthetic data. We probably want the "true" model to not be too crazy, so this should probably by a parametric (not splined) model''') args = parser.parse_args() if args.title is not None and args.extratitle is not None: print("--title and --extratitle are mutually exclusive", file=sys.stderr) sys.exit(1) return args args = parse_args() if args.object_width_n is None and \ args.object_height_n is None: args.object_width_n = "10" args.object_height_n = 10 elif not ( args.object_width_n is not None and \ args.object_height_n is not None) and \ args.scan != 'object_width_n': raise Exception("Either --object-width-n or --object-height-n are given: you must pass both or neither") # arg-parsing is done before the imports so that --help works without building # stuff, so that I can generate the manpages and README import numpy as np import numpysane as nps import gnuplotlib as gp import copy import os.path # I import the LOCAL mrcal scriptdir = os.path.dirname(os.path.realpath(__file__)) sys.path[:0] = f"{scriptdir}/../..", import mrcal def split_list(s, t): r'''Splits a comma-separated list None -> None "A,B,C" -> (A,B,C) "A" -> A ''' if s is None: return None,0 l = [t(x) for x in s.split(',')] if len(l) == 1: return l[0],1 return l,len(l) def first(s): r'''first in a list, or the value if a scalar''' if hasattr(s, '__iter__'): return s[0] return s def last(s): r'''last in a list, or the value if a scalar''' if hasattr(s, '__iter__'): return s[-1] return s controllable_args = \ dict( tilt_radius = dict(_type = float), range = dict(_type = float), Ncameras = dict(_type = int), Nframes = dict(_type = int), object_spacing = dict(_type = float), object_width_n = dict(_type = int) ) for a in controllable_args.keys(): l,n = split_list(getattr(args, a), controllable_args[a]['_type']) controllable_args[a]['value'] = l controllable_args[a]['listlen'] = n if any( controllable_args[a]['listlen'] > 2 for a in controllable_args.keys() ): raise Exception(f"All controllable args must have either at most 2 values. {a} has {controllable_args[a]['listlen']}") controllable_arg_0values = [ a for a in controllable_args.keys() if controllable_args[a]['listlen'] == 0 ] controllable_arg_2values = [ a for a in controllable_args.keys() if controllable_args[a]['listlen'] == 2 ] if len(controllable_arg_2values) == 0: controllable_arg_2values = '' elif len(controllable_arg_2values) == 1: controllable_arg_2values = controllable_arg_2values[0] else: raise Exception(f"At most 1 controllable arg may have 2 values. Instead I saw: {controllable_arg_2values}") if re.match("num_far_constant_Nframes_", args.scan): if args.x_radius is not None or \ args.y_radius is not None or \ args.z_radius is not None: raise Exception("--x-radius and --y-radius and --z-radius are exclusive with --scan num_far_...") # special case if 'Nframes' not in controllable_arg_0values: raise Exception(f"I'm scanning '{args.scan}', so --Nframes must not have been given") if 'range' != controllable_arg_2values: raise Exception(f"I'm scanning '{args.scan}', so --range must have 2 values") if args.scan == "num_far_constant_Nframes_near": if args.Nframes_all is not None: raise Exception(f"I'm scanning '{args.scan}', so --Nframes-all must not have have been given") if args.Nframes_near is None: raise Exception(f"I'm scanning '{args.scan}', so --Nframes-near must have have been given") else: if args.Nframes_near is not None: raise Exception(f"I'm scanning '{args.scan}', so --Nframes-near must not have have been given") if args.Nframes_all is None: raise Exception(f"I'm scanning '{args.scan}', so --Nframes-all must have have been given") else: if args.scan != controllable_arg_2values: # This covers the scanning-nothing-no2value-anything case raise Exception(f"I'm scanning '{args.scan}', the arg given 2 values is '{controllable_arg_2values}'. They must match") if len(controllable_arg_0values): raise Exception(f"I'm scanning '{args.scan}', so all controllable args should have some value. Missing: '{controllable_arg_0values}") if args.scan == '': args.Nscan_samples = 1 Nuncertainty_at_range_samples = 80 uncertainty_at_range_sampled_min = args.uncertainty_at_range_sampled_min if uncertainty_at_range_sampled_min is None: uncertainty_at_range_sampled_min = first(controllable_args['range']['value'])/10. uncertainty_at_range_sampled_max = args.uncertainty_at_range_sampled_max if uncertainty_at_range_sampled_max is None: uncertainty_at_range_sampled_max = last(controllable_args['range']['value']) *10. uncertainty_at_range_samples = \ np.logspace( np.log10(uncertainty_at_range_sampled_min), np.log10(uncertainty_at_range_sampled_max), Nuncertainty_at_range_samples) # I want the RNG to be deterministic np.random.seed(0) model_intrinsics = mrcal.cameramodel(args.model) calobject_warp_true_ref = np.array(args.calobject_warp) def solve(Ncameras, Nframes_near, Nframes_far, object_spacing, models_true, # q.shape = (Nframes, Ncameras, object_height, object_width, 2) # Rt_ref_board.shape = (Nframes, 4,3) q_true_near, Rt_ref_board_true_near, q_true_far, Rt_ref_board_true_far, fixed_frames = args.fixed_frames): q_true_near = q_true_near [:Nframes_near] Rt_ref_board_true_near = Rt_ref_board_true_near[:Nframes_near] if q_true_far is not None: q_true_far = q_true_far [:Nframes_far ] Rt_ref_board_true_far = Rt_ref_board_true_far [:Nframes_far ] else: q_true_far = np.zeros( (0,) + q_true_near.shape[1:], dtype = q_true_near.dtype) Rt_ref_board_true_far = np.zeros( (0,) + Rt_ref_board_true_near.shape[1:], dtype = Rt_ref_board_true_near.dtype) calobject_warp_true = calobject_warp_true_ref.copy() Nframes_all = Nframes_near + Nframes_far Rt_ref_board_true = nps.glue( Rt_ref_board_true_near, Rt_ref_board_true_far, axis = -3 ) # Dense observations. All the cameras see all the boards indices_frame_camera = np.zeros( (Nframes_all*Ncameras, 2), dtype=np.int32) indices_frame = indices_frame_camera[:,0].reshape(Nframes_all,Ncameras) indices_frame.setfield(nps.outer(np.arange(Nframes_all, dtype=np.int32), np.ones((Ncameras,), dtype=np.int32)), dtype = np.int32) indices_camera = indices_frame_camera[:,1].reshape(Nframes_all,Ncameras) indices_camera.setfield(nps.outer(np.ones((Nframes_all,), dtype=np.int32), np.arange(Ncameras, dtype=np.int32)), dtype = np.int32) indices_frame_camintrinsics_camextrinsics = \ nps.glue(indices_frame_camera, indices_frame_camera[:,(1,)], axis=-1) # If not fixed_frames: we use camera0 as the reference cordinate system, and # we allow the chessboard poses to move around. Else: the reference # coordinate system is arbitrary, but all cameras are allowed to move # around. The chessboards poses are fixed if not fixed_frames: indices_frame_camintrinsics_camextrinsics[:,2] -= 1 q = nps.glue( q_true_near, q_true_far, axis = -5 ) # apply noise q += np.random.randn(*q.shape) * args.observed_pixel_uncertainty # The observations are dense (in the data every camera sees all the # chessboards), but some of the observations WILL be out of bounds. I # pre-mark those as outliers so that the solve doesn't do weird stuff # Set the weights to 1 initially # shape (Nframes, Ncameras, object_height_n, object_width_n, 3) observations = nps.glue(q, np.ones( q.shape[:-1] + (1,) ), axis = -1) # shape (Ncameras, 1, 1, 2) imagersizes = nps.mv( nps.cat(*[ m.imagersize() for m in models_true ]), -2, -4 ) # mark the out-of-view observations as outliers observations[ np.any( q < 0, axis=-1 ), 2 ] = -1. observations[ np.any( q-imagersizes >= 0, axis=-1 ), 2 ] = -1. # shape (Nobservations, Nh, Nw, 2) observations = nps.clump( observations, n = 2 ) intrinsics = nps.cat( *[m.intrinsics()[1] for m in models_true] ) # If not fixed_frames: we use camera0 as the reference cordinate system, and # we allow the chessboard poses to move around. Else: the reference # coordinate system is arbitrary, but all cameras are allowed to move # around. The chessboards poses are fixed if fixed_frames: extrinsics = nps.cat( *[m.extrinsics_rt_fromref() for m in models_true] ) else: extrinsics = nps.cat( *[m.extrinsics_rt_fromref() for m in models_true[1:]] ) if len(extrinsics) == 0: extrinsics = None if nps.norm2(models_true[0].extrinsics_rt_fromref()) > 1e-6: raise Exception("models_true[0] must sit at the origin") imagersizes = nps.cat( *[m.imagersize() for m in models_true] ) optimization_inputs = \ dict( # intrinsics filled in later extrinsics_rt_fromref = extrinsics, frames_rt_toref = mrcal.rt_from_Rt(Rt_ref_board_true), points = None, observations_board = observations, indices_frame_camintrinsics_camextrinsics = indices_frame_camintrinsics_camextrinsics, observations_point = None, indices_point_camintrinsics_camextrinsics = None, # lensmodel filled in later calobject_warp = copy.deepcopy(calobject_warp_true), imagersizes = imagersizes, calibration_object_spacing = object_spacing, verbose = False, # do_optimize_extrinsics filled in later # do_optimize_intrinsics_core filled in later do_optimize_frames = False, do_optimize_intrinsics_distortions = True, do_optimize_calobject_warp = False, # turn this on, and reoptimize later, if needed do_apply_regularization = True, do_apply_outlier_rejection = False) if args.lensmodel is None: lensmodel = model_intrinsics.intrinsics()[0] else: lensmodel = args.lensmodel Nintrinsics = mrcal.lensmodel_num_params(lensmodel) if re.search("SPLINED", lensmodel): # These are already mostly right, So I lock them down while I seed the # intrinsics optimization_inputs['do_optimize_extrinsics'] = False # I pre-optimize the core, and then lock it down optimization_inputs['lensmodel'] = 'LENSMODEL_STEREOGRAPHIC' optimization_inputs['intrinsics'] = intrinsics[:,:4].copy() optimization_inputs['do_optimize_intrinsics_core'] = True stats = mrcal.optimize(**optimization_inputs) print(f"## optimized. rms = {stats['rms_reproj_error__pixels']}", file=sys.stderr) # core is good. Lock that down, and get an estimate for the control # points optimization_inputs['do_optimize_intrinsics_core'] = False optimization_inputs['lensmodel'] = lensmodel optimization_inputs['intrinsics'] = nps.glue(optimization_inputs['intrinsics'], np.zeros((Ncameras,Nintrinsics-4),),axis=-1) stats = mrcal.optimize(**optimization_inputs) print(f"## optimized. rms = {stats['rms_reproj_error__pixels']}", file=sys.stderr) # Ready for a final reoptimization with the geometry optimization_inputs['do_optimize_extrinsics'] = True if not fixed_frames: optimization_inputs['do_optimize_frames'] = True else: optimization_inputs['lensmodel'] = lensmodel if not mrcal.lensmodel_metadata_and_config(lensmodel)['has_core'] or \ not mrcal.lensmodel_metadata_and_config(model_intrinsics.intrinsics()[0])['has_core']: raise Exception("I'm assuming all the models here have a core. It's just lazy coding. If you see this, feel free to fix.") if lensmodel == model_intrinsics.intrinsics()[0]: # Same model. Grab the intrinsics. They're 99% right optimization_inputs['intrinsics'] = intrinsics.copy() else: # Different model. Grab the intrinsics core, optimize the rest optimization_inputs['intrinsics'] = nps.glue(intrinsics[:,:4], np.zeros((Ncameras,Nintrinsics-4),),axis=-1) optimization_inputs['do_optimize_intrinsics_core'] = True optimization_inputs['do_optimize_extrinsics'] = True if not fixed_frames: optimization_inputs['do_optimize_frames'] = True stats = mrcal.optimize(**optimization_inputs) print(f"## optimized. rms = {stats['rms_reproj_error__pixels']}", file=sys.stderr) if not args.skip_calobject_warp_solve: optimization_inputs['do_optimize_calobject_warp'] = True stats = mrcal.optimize(**optimization_inputs) print(f"## optimized. rms = {stats['rms_reproj_error__pixels']}", file=sys.stderr) return optimization_inputs def observation_centroid(optimization_inputs, icam): r'''mean pixel coordinate of all non-outlier points seen by a given camera''' ifcice = optimization_inputs['indices_frame_camintrinsics_camextrinsics'] observations = optimization_inputs['observations_board'] # pick the camera I want observations = observations[ifcice[:,1] == icam] # ignore outliers q = observations[ (observations[...,2] > 0), :2] return np.mean(q, axis=-2) def eval_one_rangenear_tilt(models_true, range_near, range_far, tilt_radius, object_width_n, object_height_n, object_spacing, uncertainty_at_range_samples, Ncameras, Nframes_near_samples, Nframes_far_samples): # I want the RNG to be deterministic np.random.seed(0) uncertainties = np.zeros((len(Nframes_far_samples), len(uncertainty_at_range_samples)), dtype=float) radius_cameras = (args.camera_spacing * (Ncameras-1)) / 2. x_radius = args.x_radius if args.x_radius is not None else range_near*2. + radius_cameras y_radius = args.y_radius if args.y_radius is not None else range_near*2. z_radius = args.z_radius if args.z_radius is not None else range_near/10. # shapes (Nframes, Ncameras, Nh, Nw, 2), # (Nframes, 4,3) q_true_near, Rt_ref_board_true_near = \ mrcal.synthesize_board_observations(models_true, object_width_n = object_width_n, object_height_n = object_height_n, object_spacing = object_spacing, calobject_warp = calobject_warp_true_ref, rt_ref_boardcenter = np.array((0., 0., 0., radius_cameras, 0, range_near,)), rt_ref_boardcenter__noiseradius = \ np.array((np.pi/180. * tilt_radius, np.pi/180. * tilt_radius, np.pi/180. * args.roll_radius, x_radius, y_radius, z_radius)), Nframes = np.max(Nframes_near_samples), which = args.which) if range_far is not None: q_true_far, Rt_ref_board_true_far = \ mrcal.synthesize_board_observations(models_true, object_width_n = object_width_n, object_height_n = object_height_n, object_spacing = object_spacing, calobject_warp = calobject_warp_true_ref, rt_ref_boardcenter = np.array((0., 0., 0., radius_cameras, 0, range_far,)), rt_ref_boardcenter__noiseradius = \ np.array((np.pi/180. * tilt_radius, np.pi/180. * tilt_radius, np.pi/180. * args.roll_radius, range_far*2. + radius_cameras, range_far*2., range_far/10.)), Nframes = np.max(Nframes_far_samples), which = args.which) else: q_true_far = None Rt_ref_board_true_far = None for i_Nframes_far in range(len(Nframes_far_samples)): Nframes_far = Nframes_far_samples [i_Nframes_far] Nframes_near = Nframes_near_samples[i_Nframes_far] optimization_inputs = solve(Ncameras, Nframes_near, Nframes_far, object_spacing, models_true, q_true_near, Rt_ref_board_true_near, q_true_far, Rt_ref_board_true_far) models_out = \ [ mrcal.cameramodel( optimization_inputs = optimization_inputs, icam_intrinsics = icam ) \ for icam in range(Ncameras) ] model = models_out[args.icam_uncertainty] if args.show_geometry_first_solve: mrcal.show_geometry(models_out, show_calobjects = True, wait = True) sys.exit() if args.show_uncertainty_first_solve: mrcal.show_projection_uncertainty(model, observations= True, wait = True) sys.exit() if args.write_models_first_solve: for i in range(len(models_out)): f = f"/tmp/camera{i}.cameramodel" if os.path.exists(f): input(f"File {f} already exists, and I want to overwrite it. Press enter to overwrite. Ctrl-c to exit") models_out[i].write(f) print(f"Wrote {f}") sys.exit() # shape (N,3) # I sample the center of the imager pcam_samples = \ mrcal.unproject( observation_centroid(optimization_inputs, args.icam_uncertainty), *model.intrinsics(), normalize = True) * \ nps.dummy(uncertainty_at_range_samples, -1) uncertainties[i_Nframes_far] = \ mrcal.projection_uncertainty(pcam_samples, model, what='worstdirection-stdev') return uncertainties output_table_legend = 'range_uncertainty_sample Nframes_near Nframes_far Ncameras range_near range_far tilt_radius object_width_n object_spacing uncertainty' output_table_fmt = '%f %d %d %d %f %f %f %d %f %f' output_table_icol__range_uncertainty_sample = 0 output_table_icol__Nframes_near = 1 output_table_icol__Nframes_far = 2 output_table_icol__Ncameras = 3 output_table_icol__range_near = 4 output_table_icol__range_far = 5 output_table_icol__tilt_radius = 6 output_table_icol__object_width_n = 7 output_table_icol__object_spacing = 8 output_table_icol__uncertainty = 9 output_table_Ncols = 10 output_table = np.zeros( (args.Nscan_samples, Nuncertainty_at_range_samples, output_table_Ncols), dtype=float) output_table[:,:, output_table_icol__range_uncertainty_sample] += uncertainty_at_range_samples if re.match("num_far_constant_Nframes_", args.scan): Nfar_samples = args.Nscan_samples if args.scan == "num_far_constant_Nframes_near": Nframes_far_samples = np.linspace(0, args.Nframes_near//4, Nfar_samples, dtype=int) Nframes_near_samples = Nframes_far_samples*0 + args.Nframes_near else: Nframes_far_samples = np.linspace(0, args.Nframes_all, Nfar_samples, dtype=int) Nframes_near_samples = args.Nframes_all - Nframes_far_samples models_true = \ [ mrcal.cameramodel(intrinsics = model_intrinsics.intrinsics(), imagersize = model_intrinsics.imagersize(), extrinsics_rt_toref = np.array((0,0,0, i*args.camera_spacing, 0,0), dtype=float) ) \ for i in range(controllable_args['Ncameras']['value']) ] # shape (args.Nscan_samples, Nuncertainty_at_range_samples) output_table[:,:, output_table_icol__uncertainty] = \ eval_one_rangenear_tilt(models_true, *controllable_args['range']['value'], controllable_args['tilt_radius']['value'], controllable_args['object_width_n']['value'], args.object_height_n, controllable_args['object_spacing']['value'], uncertainty_at_range_samples, controllable_args['Ncameras']['value'], Nframes_near_samples, Nframes_far_samples) output_table[:,:, output_table_icol__Nframes_near] += nps.transpose(Nframes_near_samples) output_table[:,:, output_table_icol__Nframes_far] += nps.transpose(Nframes_far_samples) output_table[:,:, output_table_icol__Ncameras] = controllable_args['Ncameras']['value'] output_table[:,:, output_table_icol__range_near] = controllable_args['range']['value'][0] output_table[:,:, output_table_icol__range_far] = controllable_args['range']['value'][1] output_table[:,:, output_table_icol__tilt_radius ] = controllable_args['tilt_radius']['value'] output_table[:,:, output_table_icol__object_width_n ] = controllable_args['object_width_n']['value'] output_table[:,:, output_table_icol__object_spacing ] = controllable_args['object_spacing']['value'] samples = Nframes_far_samples elif args.scan == "range": Nframes_near_samples = np.array( (controllable_args['Nframes']['value'],), dtype=int) Nframes_far_samples = np.array( (0,), dtype=int) models_true = \ [ mrcal.cameramodel(intrinsics = model_intrinsics.intrinsics(), imagersize = model_intrinsics.imagersize(), extrinsics_rt_toref = np.array((0,0,0, i*args.camera_spacing, 0,0), dtype=float) ) \ for i in range(controllable_args['Ncameras']['value']) ] Nrange_samples = args.Nscan_samples range_samples = np.linspace(*controllable_args['range']['value'], Nrange_samples, dtype=float) for i_range in range(Nrange_samples): output_table[i_range,:, output_table_icol__uncertainty] = \ eval_one_rangenear_tilt(models_true, range_samples[i_range], None, controllable_args['tilt_radius']['value'], controllable_args['object_width_n']['value'], args.object_height_n, controllable_args['object_spacing']['value'], uncertainty_at_range_samples, controllable_args['Ncameras']['value'], Nframes_near_samples, Nframes_far_samples)[0] output_table[:,:, output_table_icol__Nframes_near] = controllable_args['Nframes']['value'] output_table[:,:, output_table_icol__Nframes_far] = 0 output_table[:,:, output_table_icol__Ncameras] = controllable_args['Ncameras']['value'] output_table[:,:, output_table_icol__range_near] += nps.transpose(range_samples) output_table[:,:, output_table_icol__range_far] = -1 output_table[:,:, output_table_icol__tilt_radius ] = controllable_args['tilt_radius']['value'] output_table[:,:, output_table_icol__object_width_n ] = controllable_args['object_width_n']['value'] output_table[:,:, output_table_icol__object_spacing ] = controllable_args['object_spacing']['value'] samples = range_samples elif args.scan == "tilt_radius": Nframes_near_samples = np.array( (controllable_args['Nframes']['value'],), dtype=int) Nframes_far_samples = np.array( (0,), dtype=int) models_true = \ [ mrcal.cameramodel(intrinsics = model_intrinsics.intrinsics(), imagersize = model_intrinsics.imagersize(), extrinsics_rt_toref = np.array((0,0,0, i*args.camera_spacing, 0,0), dtype=float) ) \ for i in range(controllable_args['Ncameras']['value']) ] Ntilt_rad_samples = args.Nscan_samples tilt_rad_samples = np.linspace(*controllable_args['tilt_radius']['value'], Ntilt_rad_samples, dtype=float) for i_tilt in range(Ntilt_rad_samples): output_table[i_tilt,:, output_table_icol__uncertainty] = \ eval_one_rangenear_tilt(models_true, controllable_args['range']['value'], None, tilt_rad_samples[i_tilt], controllable_args['object_width_n']['value'], args.object_height_n, controllable_args['object_spacing']['value'], uncertainty_at_range_samples, controllable_args['Ncameras']['value'], Nframes_near_samples, Nframes_far_samples)[0] output_table[:,:, output_table_icol__Nframes_near] = controllable_args['Nframes']['value'] output_table[:,:, output_table_icol__Nframes_far] = 0 output_table[:,:, output_table_icol__Ncameras] = controllable_args['Ncameras']['value'] output_table[:,:, output_table_icol__range_near] = controllable_args['range']['value'] output_table[:,:, output_table_icol__range_far] = -1 output_table[:,:, output_table_icol__tilt_radius] += nps.transpose(tilt_rad_samples) output_table[:,:, output_table_icol__object_width_n ] = controllable_args['object_width_n']['value'] output_table[:,:, output_table_icol__object_spacing ] = controllable_args['object_spacing']['value'] samples = tilt_rad_samples elif args.scan == "Ncameras": Nframes_near_samples = np.array( (controllable_args['Nframes']['value'],), dtype=int) Nframes_far_samples = np.array( (0,), dtype=int) N_Ncameras_samples = args.Nscan_samples Ncameras_samples = np.linspace(*controllable_args['Ncameras']['value'], N_Ncameras_samples, dtype=int) for i_Ncameras in range(N_Ncameras_samples): Ncameras = Ncameras_samples[i_Ncameras] models_true = \ [ mrcal.cameramodel(intrinsics = model_intrinsics.intrinsics(), imagersize = model_intrinsics.imagersize(), extrinsics_rt_toref = np.array((0,0,0, i*args.camera_spacing, 0,0), dtype=float) ) \ for i in range(Ncameras) ] output_table[i_Ncameras,:, output_table_icol__uncertainty] = \ eval_one_rangenear_tilt(models_true, controllable_args['range']['value'], None, controllable_args['tilt_radius']['value'], controllable_args['object_width_n']['value'], args.object_height_n, controllable_args['object_spacing']['value'], uncertainty_at_range_samples, Ncameras, Nframes_near_samples, Nframes_far_samples)[0] output_table[:,:, output_table_icol__Nframes_near] = controllable_args['Nframes']['value'] output_table[:,:, output_table_icol__Nframes_far] = 0 output_table[:,:, output_table_icol__Ncameras] += nps.transpose(Ncameras_samples) output_table[:,:, output_table_icol__range_near] = controllable_args['range']['value'] output_table[:,:, output_table_icol__range_far] = -1 output_table[:,:, output_table_icol__tilt_radius] = controllable_args['tilt_radius']['value'] output_table[:,:, output_table_icol__object_width_n ] = controllable_args['object_width_n']['value'] output_table[:,:, output_table_icol__object_spacing ] = controllable_args['object_spacing']['value'] samples = Ncameras_samples elif args.scan == "Nframes": Nframes_far_samples = np.array( (0,), dtype=int) models_true = \ [ mrcal.cameramodel(intrinsics = model_intrinsics.intrinsics(), imagersize = model_intrinsics.imagersize(), extrinsics_rt_toref = np.array((0,0,0, i*args.camera_spacing, 0,0), dtype=float) ) \ for i in range(controllable_args['Ncameras']['value']) ] N_Nframes_samples = args.Nscan_samples Nframes_samples = np.linspace(*controllable_args['Nframes']['value'], N_Nframes_samples, dtype=int) for i_Nframes in range(N_Nframes_samples): Nframes = Nframes_samples[i_Nframes] Nframes_near_samples = np.array( (Nframes,), dtype=int) output_table[i_Nframes,:, output_table_icol__uncertainty] = \ eval_one_rangenear_tilt(models_true, controllable_args['range']['value'], None, controllable_args['tilt_radius']['value'], controllable_args['object_width_n']['value'], args.object_height_n, controllable_args['object_spacing']['value'], uncertainty_at_range_samples, controllable_args['Ncameras']['value'], Nframes_near_samples, Nframes_far_samples)[0] output_table[:,:, output_table_icol__Nframes_near]+= nps.transpose(Nframes_samples) output_table[:,:, output_table_icol__Nframes_far] = 0 output_table[:,:, output_table_icol__Ncameras] = controllable_args['Ncameras']['value'] output_table[:,:, output_table_icol__range_near] = controllable_args['range']['value'] output_table[:,:, output_table_icol__range_far] = -1 output_table[:,:, output_table_icol__tilt_radius] = controllable_args['tilt_radius']['value'] output_table[:,:, output_table_icol__object_width_n ] = controllable_args['object_width_n']['value'] output_table[:,:, output_table_icol__object_spacing ] = controllable_args['object_spacing']['value'] samples = Nframes_samples elif args.scan == "object_width_n": Nframes_near_samples = np.array( (controllable_args['Nframes']['value'],), dtype=int) Nframes_far_samples = np.array( (0,), dtype=int) models_true = \ [ mrcal.cameramodel(intrinsics = model_intrinsics.intrinsics(), imagersize = model_intrinsics.imagersize(), extrinsics_rt_toref = np.array((0,0,0, i*args.camera_spacing, 0,0), dtype=float) ) \ for i in range(controllable_args['Ncameras']['value']) ] Nsamples = args.Nscan_samples samples = np.linspace(*controllable_args['object_width_n']['value'], Nsamples, dtype=int) # As I move the width, I adjust the spacing to keep the total board size # constant. The object spacing in the argument applies to the MIN value of # the object_width_n. W = (controllable_args['object_width_n']['value'][0]-1) * controllable_args['object_spacing']['value'] object_spacing = W / samples for i_sample in range(Nsamples): output_table[i_sample,:, output_table_icol__uncertainty] = \ eval_one_rangenear_tilt(models_true, controllable_args['range']['value'], None, controllable_args['tilt_radius']['value'], samples[i_sample], samples[i_sample], object_spacing[i_sample], uncertainty_at_range_samples, controllable_args['Ncameras']['value'], Nframes_near_samples, Nframes_far_samples)[0] output_table[:,:, output_table_icol__Nframes_near] = controllable_args['Nframes']['value'] output_table[:,:, output_table_icol__Nframes_far] = 0 output_table[:,:, output_table_icol__Ncameras] = controllable_args['Ncameras']['value'] output_table[:,:, output_table_icol__range_near] = controllable_args['range']['value'] output_table[:,:, output_table_icol__range_far] = -1 output_table[:,:, output_table_icol__tilt_radius] = controllable_args['tilt_radius']['value'] output_table[:,:, output_table_icol__object_width_n ]+= nps.transpose(samples) output_table[:,:, output_table_icol__object_spacing ]+= nps.transpose(object_spacing) elif args.scan == "object_spacing": Nframes_near_samples = np.array( (controllable_args['Nframes']['value'],), dtype=int) Nframes_far_samples = np.array( (0,), dtype=int) models_true = \ [ mrcal.cameramodel(intrinsics = model_intrinsics.intrinsics(), imagersize = model_intrinsics.imagersize(), extrinsics_rt_toref = np.array((0,0,0, i*args.camera_spacing, 0,0), dtype=float) ) \ for i in range(controllable_args['Ncameras']['value']) ] Nsamples = args.Nscan_samples samples = np.linspace(*controllable_args['object_spacing']['value'], Nsamples, dtype=float) # As I move the spacing, I leave object_width_n, letting the total board # size change. The object spacing in the argument applies to the MIN value # of the object_width_n. for i_sample in range(Nsamples): r = controllable_args['range']['value'] if args.scan_object_spacing_compensate_range_from: r *= samples[i_sample]/args.scan_object_spacing_compensate_range_from output_table[i_sample,:, output_table_icol__uncertainty] = \ eval_one_rangenear_tilt(models_true, r, None, controllable_args['tilt_radius']['value'], controllable_args['object_width_n']['value'], args.object_height_n, samples[i_sample], uncertainty_at_range_samples, controllable_args['Ncameras']['value'], Nframes_near_samples, Nframes_far_samples)[0] output_table[:,:, output_table_icol__Nframes_near] = controllable_args['Nframes']['value'] output_table[:,:, output_table_icol__Nframes_far] = 0 output_table[:,:, output_table_icol__Ncameras] = controllable_args['Ncameras']['value'] output_table[:,:, output_table_icol__range_far] = -1 output_table[:,:, output_table_icol__tilt_radius] = controllable_args['tilt_radius']['value'] output_table[:,:, output_table_icol__object_width_n ] = controllable_args['object_width_n']['value'] output_table[:,:, output_table_icol__object_spacing ]+= nps.transpose(samples) if args.scan_object_spacing_compensate_range_from: output_table[:,:, output_table_icol__range_near] += controllable_args['range']['value'] * nps.transpose(samples/samples[0]) else: output_table[:,:, output_table_icol__range_near] = controllable_args['range']['value'] else: # no --scan. We just want one sample Nframes_near_samples = np.array( (controllable_args['Nframes']['value'],), dtype=int) Nframes_far_samples = np.array( (0,), dtype=int) models_true = \ [ mrcal.cameramodel(intrinsics = model_intrinsics.intrinsics(), imagersize = model_intrinsics.imagersize(), extrinsics_rt_toref = np.array((0,0,0, i*args.camera_spacing, 0,0), dtype=float) ) \ for i in range(controllable_args['Ncameras']['value']) ] output_table[0,:, output_table_icol__uncertainty] = \ eval_one_rangenear_tilt(models_true, controllable_args['range']['value'], None, controllable_args['tilt_radius']['value'], controllable_args['object_width_n']['value'], args.object_height_n, controllable_args['object_spacing']['value'], uncertainty_at_range_samples, controllable_args['Ncameras']['value'], Nframes_near_samples, Nframes_far_samples)[0] output_table[:,:, output_table_icol__Nframes_near] = controllable_args['Nframes']['value'] output_table[:,:, output_table_icol__Nframes_far] = 0 output_table[:,:, output_table_icol__Ncameras] = controllable_args['Ncameras']['value'] output_table[:,:, output_table_icol__range_near] = controllable_args['range']['value'] output_table[:,:, output_table_icol__range_far] = -1 output_table[:,:, output_table_icol__tilt_radius] = controllable_args['tilt_radius']['value'] output_table[:,:, output_table_icol__object_width_n ] = controllable_args['object_width_n']['value'] output_table[:,:, output_table_icol__object_spacing ] = controllable_args['object_spacing']['value'] samples = None if isinstance(controllable_args['range']['value'], float): guides = [ f"arrow nohead dashtype 3 from {controllable_args['range']['value']},graph 0 to {controllable_args['range']['value']},graph 1" ] else: guides = [ f"arrow nohead dashtype 3 from {r},graph 0 to {r},graph 1" for r in controllable_args['range']['value'] ] guides.append(f"arrow nohead dashtype 3 from graph 0,first {args.observed_pixel_uncertainty} to graph 1,first {args.observed_pixel_uncertainty}") title = args.title if args.scan == "num_far_constant_Nframes_near": if title is None: title = f"Scanning 'far' observations added to a set of 'near' observations. Have {controllable_args['Ncameras']['value']} cameras, {args.Nframes_near} 'near' observations, at ranges {controllable_args['range']['value']}." legend_what = 'Nframes_far' elif args.scan == "num_far_constant_Nframes_all": if title is None: title = f"Scanning 'far' observations replacing 'near' observations. Have {controllable_args['Ncameras']['value']} cameras, {args.Nframes_all} total observations, at ranges {controllable_args['range']['value']}." legend_what = 'Nframes_far' elif args.scan == "Nframes": if title is None: title = f"Scanning Nframes. Have {controllable_args['Ncameras']['value']} cameras looking out at {controllable_args['range']['value']:.2f}m." legend_what = 'Nframes' elif args.scan == "Ncameras": if title is None: title = f"Scanning Ncameras. Observing {controllable_args['Nframes']['value']} boards at {controllable_args['range']['value']:.2f}m." legend_what = 'Ncameras' elif args.scan == "range": if title is None: title = f"Scanning the distance to observations. Have {controllable_args['Ncameras']['value']} cameras looking at {controllable_args['Nframes']['value']} boards." legend_what = 'Range-to-chessboards' elif args.scan == "tilt_radius": if title is None: title = f"Scanning the board tilt. Have {controllable_args['Ncameras']['value']} cameras looking at {controllable_args['Nframes']['value']} boards at {controllable_args['range']['value']:.2f}m" legend_what = 'Random chessboard tilt radius' elif args.scan == "object_width_n": if title is None: title = f"Scanning the calibration object density, keeping the board size constant. Have {controllable_args['Ncameras']['value']} cameras looking at {controllable_args['Nframes']['value']} boards at {controllable_args['range']['value']:.2f}m" legend_what = 'Number of chessboard points per side' elif args.scan == "object_spacing": if title is None: if args.scan_object_spacing_compensate_range_from: title = f"Scanning the calibration object spacing, keeping the point count constant, and letting the board grow. Range grows with spacing. Have {controllable_args['Ncameras']['value']} cameras looking at {controllable_args['Nframes']['value']} boards at {controllable_args['range']['value']:.2f}m" else: title = f"Scanning the calibration object spacing, keeping the point count constant, and letting the board grow. Range is constant. Have {controllable_args['Ncameras']['value']} cameras looking at {controllable_args['Nframes']['value']} boards at {controllable_args['range']['value']:.2f}m" legend_what = 'Distance between adjacent chessboard corners' else: # no --scan. We just want one sample if title is None: title = f"Have {controllable_args['Ncameras']['value']} cameras looking at {controllable_args['Nframes']['value']} boards at {controllable_args['range']['value']:.2f}m with tilt radius {controllable_args['tilt_radius']['value']}" if args.extratitle is not None: title = f"{title}: {args.extratitle}" if samples is None: legend = None elif samples.dtype.kind == 'i': legend = np.array([ f"{legend_what} = {x}" for x in samples]) else: legend = np.array([ f"{legend_what} = {x:.2f}" for x in samples]) np.savetxt(sys.stdout, nps.clump(output_table, n=2), fmt = output_table_fmt, header= output_table_legend) plotoptions = \ dict( yrange = (0, args.ymax), _with = 'lines', _set = guides, unset = 'grid', title = title, xlabel = 'Range (m)', ylabel = 'Expected worst-direction uncertainty (pixels)', hardcopy = args.hardcopy, terminal = args.terminal, wait = not args.explore and args.hardcopy is None) if legend is not None: plotoptions['legend'] = legend if args.set: gp.add_plot_option(plotoptions, _set = args.set) if args.unset: gp.add_plot_option(plotoptions, _unset = args.unset) gp.plot(uncertainty_at_range_samples, output_table[:,:, output_table_icol__uncertainty], **plotoptions) if args.explore: import IPython IPython.embed() sys.exit() mrcal-2.4.1/analyses/extrinsics-stability.py 0000775 0000000 0000000 00000012351 14563016623 0021215 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3 import sys import numpy as np import numpysane as nps import gnuplotlib as gp import mrcal def compute_Rt_implied_01(*models): lensmodels = [model.intrinsics()[0] for model in models] intrinsics_data = [model.intrinsics()[1] for model in models] # v shape (...,Ncameras,Nheight,Nwidth,...) # q0 shape (..., Nheight,Nwidth,...) v,q0 = \ mrcal.sample_imager_unproject(60, None, *models[0].imagersize(), lensmodels, intrinsics_data, normalize = True) distance = (1.0, 100.) focus_center = None focus_radius = 500 if distance is None: atinfinity = True distance = np.ones((1,), dtype=float) else: atinfinity = False distance = nps.atleast_dims(np.array(distance), -1) distance = nps.mv(distance.ravel(), -1,-4) if focus_center is None: focus_center = (models[0].imagersize() - 1.)/2. implied_Rt10 = \ mrcal.implied_Rt10__from_unprojections(q0, # shape (len(distance),Nheight,Nwidth,3) v[0,...] * distance, v[1,...], atinfinity = atinfinity, focus_center = focus_center, focus_radius = focus_radius) return mrcal.invert_Rt(implied_Rt10) models_filenames = sys.argv[1:5] models = [mrcal.cameramodel(f) for f in models_filenames] pairs = ( ( models[0],models[1]), ( models[2],models[3]) ) # The "before" extrinsic transformation m0,m1 = pairs[0] Rt01 = mrcal.compose_Rt( m0.extrinsics_Rt_fromref(), m1.extrinsics_Rt_toref()) # The "after" extrinsics transformation. I remap both cameras into the "before" # space, so that we're looking at the extrinsics transformation in the "before" # coord system. This will allow us to compare before and after # # Rt_implied__0before_0after Rt_0after_1after Rt_implied__1after_1before Rt_implied__0before_0after = compute_Rt_implied_01(pairs[0][0], pairs[1][0]) Rt_implied__1after_1before = compute_Rt_implied_01(pairs[1][1], pairs[0][1]) m0,m1 = pairs[1] Rt_0after_1after = \ mrcal.compose_Rt( m0.extrinsics_Rt_fromref(), m1.extrinsics_Rt_toref()) Rt01_after_extrinsicsbefore = \ mrcal.compose_Rt( Rt_implied__0before_0after, Rt_0after_1after, Rt_implied__1after_1before ) # I have the two relative transforms. If camera0 is fixed, how much am I moving # camera1? Rt_1before_1after = mrcal.compose_Rt(mrcal.invert_Rt(Rt01), Rt01_after_extrinsicsbefore) r_1before_1after = mrcal.r_from_R(Rt_1before_1after[:3,:]) t_1before_1after = Rt_1before_1after[3,:] magnitude = nps.mag(t_1before_1after) direction = t_1before_1after/magnitude angle = nps.mag(r_1before_1after) axis = r_1before_1after/angle angle_deg = angle*180./np.pi np.set_printoptions(precision=2) print(f"translation: {magnitude*1000:.2f}mm in the direction {direction}") print(f"rotation: {angle_deg:.3f}deg around the axis {axis}") for i in range(len(models)): m = models[i] qcenter,dq_dv,_ = mrcal.project(np.array((0,0,1.)), *m.intrinsics(), get_gradients=True) # I now do a simple thing. I have v=[0,0,1] so dq_dv[:,2]=0. A small pitch # gives me dv = (0,sinth,costh) ~ (0,th,1). So dq = dq_dv[:,1]*th + # dq_dv[:,2] = dq_dv[:,1]*th so for a pitch: mag(dq/dth) = mag(dq_dv[:,1]). # Similarly for a yaw I have mag(dq_dv[:,0]). I find the worst one, and call # it good. I can do that because dq_dv will be diagonally dominant, and the # diagonal elements will be very similar. mrcal.rectified_resolution() does # this resolution__pix_per_rad = np.max(nps.transpose(nps.mag(dq_dv[:,:2]))) resolution__pix_per_deg = resolution__pix_per_rad * np.pi/180. if 0: # More complicated, but probably not better. And not completed # # As the camera rotates, v shifts: rotate(v,r) ~ v + dv/dr dr, so the # projection shifts to q + dq/dv dv = q + dq/dv dv/dr dr # # Rodrigues rotation formula. th = mag(r), axis = normalize(r) = r/th # # rotate(r,v) = v cos(th) + cross(axis, v) sin(th) + axis axist v (1 - cos(th)) # # If th is small: # # rotate(r,v) = v + cross(axis, v) th # = v + [ axis1*v2-axis2*v1 axis2*v0-axis0*v2 axis0*v1-axis1*v0] th # # v = [0,0,1] so # # rotate(r,v) = v + [ axis1 axis0 0] th # = v + [ r1 r0 0 ] # # So dv_dr = np.array(((0,1,0), (1,0,0), (0,0,0))) dq_dr = nps.matmult(dq_dv,dv_dr) # I have dq_dr2 = 0, so lets ignore it dq_dr01 = dq_dr[:,:2] # Can finish this now print(f"Camera {i} has a resolution of {1./resolution__pix_per_deg:.3f} degrees per pixel at the center") mrcal-2.4.1/analyses/mrcal-reoptimize 0000775 0000000 0000000 00000013643 14563016623 0017661 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3 r'''Loads a model, and re-runs the optimization problem used to generate it This is useful to analyze the solve. We can generate perfect chessboard observations, corrupted with perfect nominal noise to validate the idea that differences observed with mrcal-show-projection-diff should be predictive by the uncertainties reported by mrcal-show-projection-uncertainty IF the dominant source of error is calibration-time sampling error. By default we write the resulting model to disk into a file INPUTMODEL-reoptimized.cameramodel. If --explore, we instead drop into a REPL. ''' import sys import argparse import re import os def parse_args(): parser = \ argparse.ArgumentParser(description = __doc__, formatter_class=argparse.RawDescriptionHelpFormatter) parser.add_argument('--model-intrinsics', help='''By default, all the nominal data comes from the MODEL given in the positional argument. If --model-intrinsics is given, the intrinsics only come from this other model. These are applied only to the ONE model in icam_intrinsics''') parser.add_argument('--perfect', action= 'store_true', help='''Make perfect observations and add perfect noise''') parser.add_argument('--verbose', action = 'store_true', help='''If given, reoptimize verbosely''') parser.add_argument('--skip-outlier-rejection', action='store_true', help='''Reoptimize with no outlier rejection''') parser.add_argument('--revive-outliers', action='store_true', help='''Un-mark the outliers''') parser.add_argument('--force', '-f', action='store_true', default=False, help='''By default existing models on disk are not overwritten. Pass --force to overwrite them without complaint''') parser.add_argument('--outdir', type=lambda d: d if os.path.isdir(d) else \ parser.error(f"--outdir requires an existing directory as the arg, but got '{d}'"), help='''Directory to write the output into. If omitted, we use the directory of the input model''') parser.add_argument('--explore', action = 'store_true', help='''If given, we don't write the reoptimized model to disk, but drop into a REPL instead ''') parser.add_argument('model', type=str, help='''The input camera model. If "-", we read standard input and write to standard output. We get the frame poses and extrinsics from this model. If --model-intrinsics isn't given, we get the intrinsics from this model as well''') args = parser.parse_args() return args args = parse_args() # arg-parsing is done before the imports so that --help works without building # stuff, so that I can generate the manpages and README # I import the LOCAL mrcal sys.path[:0] = f"{os.path.dirname(os.path.realpath(__file__))}/..", import mrcal import numpy as np import numpysane as nps import time model = mrcal.cameramodel(args.model) if args.model == '-': # Input read from stdin. Write output to stdout file_output = sys.stdout else: if args.outdir is None: filename_base,extension = os.path.splitext(args.model) file_output = f"{filename_base}-reoptimized{extension}" else: f,extension = os.path.splitext(args.model) directory,filename_base = os.path.split(f) file_output = f"{args.outdir}/{filename_base}-reoptimized{extension}" if os.path.exists(file_output) and not args.force: print(f"ERROR: '{file_output}' already exists. Not overwriting this file. Pass -f to overwrite", file=sys.stderr) sys.exit(1) optimization_inputs = model.optimization_inputs() if args.perfect: if args.model_intrinsics is not None: model_intrinsics = mrcal.cameramodel(args.model_intrinsics) if model_intrinsics.intrinsics()[0] != model.intrinsics()[0]: print("At this time, --model-intrinsics MUST use the same lens model as the reference model", file=sys.stderr) sys.exit(1) optimization_inputs['intrinsics'][model.icam_intrinsics()] = \ model_intrinsics.intrinsics()[1] mrcal.make_perfect_observations(optimization_inputs) ######### Reoptimize optimization_inputs['verbose'] = args.verbose optimization_inputs['do_apply_outlier_rejection'] = not args.skip_outlier_rejection if args.revive_outliers: for what in ('observations_board','observations_point'): observations = optimization_inputs[what] print(f"Original solve has {np.count_nonzero(observations[...,2] <= 0)} outliers in {what}. Reviving them") print("") observations[observations[...,2] <= 0, 2] = 1. mrcal.optimize(**optimization_inputs) model = mrcal.cameramodel(optimization_inputs = optimization_inputs, icam_intrinsics = model.icam_intrinsics()) if not args.explore: note = \ "generated on {} with {}\n". \ format(time.strftime("%Y-%m-%d %H:%M:%S"), ' '.join(mrcal.shellquote(s) for s in sys.argv)) model.write(file_output, note=note) if isinstance(file_output, str): print(f"Wrote '{file_output}'", file=sys.stderr) sys.exit(0) print("") print("Done. The results are in the 'model' and 'optimization_inputs' variables") print("") import IPython IPython.embed() mrcal-2.4.1/analyses/mrcal-reproject-to-chessboard 0000775 0000000 0000000 00000015750 14563016623 0022223 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3 r'''Reproject calibration images to show just the chessboard in its nominal coordinates SYNOPSIS D=../mrcal-doc-external/2022-11-05--dtla-overpass--samyang--alpha7/2-f22-infinity; analyses/mrcal-reproject-to-chessboard \ --image-directory $D/images \ --outdir /tmp/frames-splined \ $D/splined.cameramodel Outliers are highlighted. This is useful to visualize calibration errors. A perfect solve would display exactly the same calibration grid with every frame. In reality, we see the small errors in the calibration, and how they affect the individual chessboard corner observations. The intent of this tool is to be able to see any unmodeled chessboard deformations. It's not yet clear if this tool can do that effectively ''' import sys import argparse import re import os def parse_args(): parser = \ argparse.ArgumentParser(description = __doc__, formatter_class=argparse.RawDescriptionHelpFormatter) parser.add_argument('--image-path-prefix', help='''If given, we prepend the given prefix to the image paths. Exclusive with --image-directory''') parser.add_argument('--image-directory', help='''If given, we extract the filenames from the image paths in the solve, and use the given directory to find those filenames. Exclusive with --image-path-prefix''') parser.add_argument('--outdir', type=lambda d: d if os.path.isdir(d) else \ parser.error(f"--outdir requires an existing directory as the arg, but got '{d}'"), default='.', help='''Directory to write the output into. If omitted, we use the current directory''') parser.add_argument('--resolution-px-m', default = 1000, type = float, help='''The resolution of the output image in pixels/m. Defaults to 1000''') parser.add_argument('--force', '-f', action='store_true', default=False, help='''By default existing files are not overwritten. Pass --force to overwrite them without complaint''') parser.add_argument('model', type=str, help='''The input camera model. We use its optimization_inputs''') args = parser.parse_args() if args.image_path_prefix is not None and \ args.image_directory is not None: print("--image-path-prefix and --image-directory are mutually exclusive", file=sys.stderr) sys.exit(1) return args args = parse_args() # arg-parsing is done before the imports so that --help works without building # stuff, so that I can generate the manpages and README import numpy as np import numpysane as nps import cv2 import mrcal try: model = mrcal.cameramodel(args.model) except Exception as e: print(f"Couldn't load camera model '{args.model}': {e}", file=sys.stderr) sys.exit(1) optimization_inputs = model.optimization_inputs() if optimization_inputs is None: print("The given model MUST have the optimization_inputs for this tool to be useful", file=sys.stderr) sys.exit(1) object_height_n,object_width_n = optimization_inputs['observations_board'].shape[1:3] object_spacing = optimization_inputs['calibration_object_spacing'] # I cover a space of N+1 squares wide/tall: N-1 between all the corners + 1 on # either side. I span squares -1..N inclusive Nx = int(args.resolution_px_m * object_spacing * (object_width_n + 1) + 0.5) Ny = int(args.resolution_px_m * object_spacing * (object_height_n + 1) + 0.5) # shape (Nframes,6) rt_ref_frame = optimization_inputs['frames_rt_toref'] rt_cam_ref = optimization_inputs['extrinsics_rt_fromref'] ifcice = optimization_inputs['indices_frame_camintrinsics_camextrinsics'] observations = optimization_inputs['observations_board'] lensmodel = optimization_inputs['lensmodel'] intrinsics_data = optimization_inputs['intrinsics'] imagepaths = optimization_inputs.get('imagepaths') if imagepaths is None: print("The given model MUST have the image paths for this tool to be useful", file=sys.stderr) sys.exit(1) # shape (Ny,Nx,3) p_frame = \ mrcal.ref_calibration_object(object_width_n,object_height_n, object_spacing, x_corner0 = -1, x_corner1 = object_width_n, Nx = Nx, y_corner0 = -1, y_corner1 = object_height_n, Ny = Ny, calobject_warp = optimization_inputs['calobject_warp']) for i in range(len(ifcice)): iframe, icamintrinsics, icamextrinsics = ifcice[i] p_ref = \ mrcal.transform_point_rt(rt_ref_frame[iframe], p_frame) if icamextrinsics >= 0: p_cam = \ mrcal.transform_point_rt(rt_cam_ref[icamextrinsics], p_ref) else: p_cam = p_ref # shape (Ny,Nx,2) q = mrcal.project(p_cam, lensmodel, intrinsics_data[icamintrinsics]).astype(np.float32) imagepath = imagepaths[i] if args.image_path_prefix is not None: imagepath = f"{args.image_path_prefix}/{imagepath}" elif args.image_directory is not None: imagepath = f"{args.image_directory}/{os.path.basename(imagepath)}" try: image = mrcal.load_image(imagepath, bits_per_pixel = 24, channels = 3) except: print(f"Couldn't read image at '{imagepath}'", file=sys.stderr) sys.exit(1) image_out = mrcal.transform_image(image,q) # shape (Ny,Nx) weight = observations[i,:,:,2] mask_outlier = weight<=0 # Mark all the outliers for iy,ix in np.argwhere(mask_outlier): # iy,ix index corners. I need to convert the to pixels in my image # I have pixels = linspace(-1,object_width_n,Nx) # So: qx = (ix+1)*(Nx-1) / (object_width_n +1) qy = (iy+1)*(Ny-1) / (object_height_n+1) # Red circle around the outlier red = (0,0,255) cv2.circle(image_out, center = np.round(np.array((qx,qy))).astype(int), radius = 10, color = red, thickness = 2) imagepath_out = f"{args.outdir}/{os.path.basename(imagepath)}" if not args.force and os.path.exists(imagepath_out): print("File already exists: '{imagepath_out}'. Not overwriting; pass --force to overwrite. Giving up.", file=sys.stderr) sys.exit(1) mrcal.save_image(imagepath_out, image_out) print(f"Wrote '{imagepath_out}'") mrcal-2.4.1/analyses/mrcal-show-model-resolution.py 0000775 0000000 0000000 00000013621 14563016623 0022374 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3 r'''Display the imager resolution''' import sys import argparse import re import os def parse_args(): parser = \ argparse.ArgumentParser(description = __doc__, formatter_class=argparse.RawDescriptionHelpFormatter) parser.add_argument('--gridn', type=int, default = (60,40), nargs = 2, help='''How densely we should sample the imager. By default we use a 60x40 grid''') parser.add_argument('--cbmin', type=float, default=0, help='''Minimum range of the colorbar''') parser.add_argument('--cbmax', type=float, help='''Maximum range of the colorbar''') parser.add_argument('--title', type=str, default = None, help='''Title string for the plot''') parser.add_argument('--hardcopy', type=str, help='''Write the output to disk, instead of making an interactive plot''') parser.add_argument('--terminal', type=str, help=r'''gnuplotlib terminal. The default is good almost always, so most people don't need this option''') parser.add_argument('--set', type=str, action='append', help='''Extra 'set' directives to gnuplotlib. Can be given multiple times''') parser.add_argument('--unset', type=str, action='append', help='''Extra 'unset' directives to gnuplotlib. Can be given multiple times''') parser.add_argument('model', type=str, help='''Camera model''') args = parser.parse_args() return args args = parse_args() import numpy as np import numpysane as nps import gnuplotlib as gp import mrcal def mean_resolution__rad_pixel(q, model): r'''Find the mean resolution in rad/pixel at q v = unproject(q). Let a rotation R = [u0 u1 v] dq = dqdv dv = dqdv R Rt dv I have about dv normal to v (the space spanned by (u0,u1)), so let's assume that v is in this space: inner(dv,v) = 0 Rt dv = [u0t dv] = [a] [u1t dv] [b] [ 0 ] [0] Let ab = [a b]t dq = M ab where M = dqdv [u0 u1]. M is (2,2) norm2(dq) = abt MtM ab minmax(norm2(dq)) = eig(MtM) dq = sqrt(eig(MtM)) pixels/rad ''' v = mrcal.unproject(q, *model.intrinsics(), normalize = True) _,dq_dv,_ = mrcal.project(v, *model.intrinsics(), get_gradients = True) # Use R_aligned_to_vector(). Add broadcasting to that function? @nps.broadcast_define( ( (3,), ), (3,3) ) def rotation_any_v_to_z(v): r'''Return any rotation matrix that maps the given unit vector v to [0,0,1]''' z = v if np.abs(v[0]) < .9: x = np.array((1,0,0), dtype=float) else: x = np.array((0,1,0), dtype=float) x -= nps.inner(x,v)*v x /= nps.mag(x) y = np.cross(z,x) return nps.cat(x,y,z) # shape (...,3,3) Rt = rotation_any_v_to_z(v) # shape (...,2,2) M = nps.matmult(dq_dv, nps.transpose(Rt)[...,:,:2]) # Let MtM = (a b). If l is an eigenvalue then # (b c) # # (a-l)*(c-l) - b^2 = 0 --> l^2 - (a+c) l + ac-b^2 = 0 # # --> l = (a+c +- sqrt( a^2 + 2ac + c^2 - 4ac + 4b^2)) / 2 = # = (a+c +- sqrt( a^2 - 2ac + c^2 + 4b^2)) / 2 = # = (a+c)/2 +- sqrt( (a-c)^2/4 + b^2) a = nps.inner(M[...,:,0], M[...,:,0]) b = nps.inner(M[...,:,0], M[...,:,1]) c = nps.inner(M[...,:,1], M[...,:,1]) sqrt_discriminant = np.sqrt( (a-c)*(a-c)/4 + b*b) l0 = (a+c)/2 + sqrt_discriminant l1 = (a+c)/2 - sqrt_discriminant resolution_pix_rad_max = np.sqrt(l0) # real in case roundoff error makes l1<0 resolution_pix_rad_min = np.real(np.sqrt(l1)) # The resolution is an ellipse (different directions could have different # resolutions). Here I assume it's a circle, and take the average of the # axis lengths resolution_pix_rad = (resolution_pix_rad_min + resolution_pix_rad_max)/2 return 1./resolution_pix_rad plot_options = dict(hardcopy = args.hardcopy, terminal = args.terminal) if args.set is not None: plot_options['set'] = args.set if args.unset is not None: plot_options['unset'] = args.unset if args.title is not None: plot_options['title'] = args.title def openmodel(f): try: return mrcal.cameramodel(f) except Exception as e: print(f"Couldn't load camera model '{f}': {e}", file=sys.stderr) sys.exit(1) model = openmodel(args.model) W,H = model.imagersize() q = np.ascontiguousarray( \ nps.mv( nps.cat(*np.meshgrid( np.linspace(0, W-1, args.gridn[0]), np.linspace(0, H-1, args.gridn[1]) )), 0,-1) ) resolution__deg_pixel = mean_resolution__rad_pixel(q, model) * 180./np.pi if args.cbmax is None: # ceil 2-digit base-10 args.cbmax = np.max(resolution__deg_pixel) base10_floor = np.power(10., np.floor(np.log10(args.cbmax))) args.cbmax = np.ceil(args.cbmax / base10_floor) * base10_floor curve_options = \ mrcal.visualization._options_heatmap_with_contours( # update these plot options plot_options, contour_min = args.cbmin, contour_max = args.cbmax, imagersize = model.imagersize(), gridn_width = args.gridn[0], gridn_height = args.gridn[1]) gp.plot( (resolution__deg_pixel, curve_options), **plot_options, wait = args.hardcopy is None) mrcal-2.4.1/analyses/noncentral/ 0000775 0000000 0000000 00000000000 14563016623 0016604 5 ustar 00root root 0000000 0000000 mrcal-2.4.1/analyses/noncentral/trace-noncentral-ray.py 0000775 0000000 0000000 00000004304 14563016623 0023212 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3 r'''Visualize noncentrality of a model Given a model and query pixel this function - Computes a ray from the camera center to the unprojection at infinity - Samples different distances along this ray - Projects them, and visualizes the difference projection from the sample point As we get closer to the camera, stronger and stronger noncentral effects are observed ''' import sys import numpy as np import numpysane as nps import gnuplotlib as gp import re import mrcal model_filename = sys.argv[1] qref = np.array((100,100), dtype=float) model = mrcal.cameramodel(model_filename) lensmodel,intrinsics_data = model.intrinsics() if not mrcal.lensmodel_metadata_and_config(lensmodel)['noncentral']: print("The given model isn't noncentral. Nothing to do", file=sys.stderr) sys.exit(1) if not re.match('^LENSMODEL_CAHVORE_', lensmodel): print("This is only implemented for CAHVORE today", file=sys.stderr) sys.exit(1) # Special-case to centralize CAHVORE intrinsics_data_centralized = intrinsics_data.copy() intrinsics_data_centralized[-3:] = 0 v_at_infinity = \ mrcal.unproject(qref, lensmodel, intrinsics_data_centralized, normalize = True) if 0: # another thing to do is to compute deltaz # dz is a function of th # pc = p - [0,0,dz] # tan(th) = pxy / (pz - dz) # q = project_central(pc) # At infinity pz >> dz -> tan(th) = pxy/pz # At p I have q = project_central(pc) # unproject_central(q) = k*pc # p - k*pc = [0,0,dz] d = 0.2 p = v_at_infinity*d q = mrcal.project (p, lensmodel, intrinsics_data) vc = mrcal.unproject(q, lensmodel, intrinsics_data_centralized, normalize = True) k01 = p[:2]/vc[:2] dz = p[2] - k01*vc[2] import IPython IPython.embed() sys.exit() Ndistances = 100 d = np.linspace(0.01, 10., Ndistances) # shape (Ndistances, 3) p = nps.dummy(d, -1) * v_at_infinity # shape (Ndistances, 2) q = mrcal.project(p, *model.intrinsics()) # shape (Ndistances,) qshift = nps.mag(q - qref) gp.plot( d, qshift, _with = 'linespoints', xlabel = 'Distance (m)', ylabel = 'Pixel shift (pixels)', wait = True ) mrcal-2.4.1/analyses/outlierness/ 0000775 0000000 0000000 00000000000 14563016623 0017015 5 ustar 00root root 0000000 0000000 mrcal-2.4.1/analyses/outlierness/outlierness-test.py 0000775 0000000 0000000 00000044333 14563016623 0022732 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3 '''This is a set of simple experiments to test the outlier-rejection and sensitivity logic''' import sys import os import numpy as np import numpysane as nps import gnuplotlib as gp usage = "Usage: {} order Npoints noise_stdev".format(sys.argv[0]) if len(sys.argv) != 4: print usage sys.exit(1) try: order = int( sys.argv[1]) # order. >= 1 N = int( sys.argv[2]) # This many points in the dataset noise_stdev = float(sys.argv[3]) # The dataset is corrupted thusly except: print usage sys.exit(1) if order < 1 or N <= 0 or noise_stdev <= 0: print usage sys.exit(1) Nquery = 70 # This many points for post-fit uncertainty-evaluation reference_equation = '0.1*(x+0.2)**2. + 3.0' def report_mismatch_relerr(a,b, what): relerr = np.abs(a-b) / ( (a+b)/2.) if relerr > 1e-6: print "MISMATCH for {}: relerr = {}, a = {}, b = {},".format(what,relerr,a,b); def report_mismatch_abserr(a,b, what): abserr = np.abs(a-b) if abserr > 1e-6: print "MISMATCH for {}: abserr = {}, a = {}, b = {},".format(what,abserr,a,b); def model_matrix(q, order): r'''Returns the model matrix S for particular domain points Here the "order" is the number of parameters in the fit. Thus order==2 means "linear" and order==3 means "quadratic"" ''' q = nps.atleast_dims(q,-1) return nps.transpose(nps.cat(*[q ** i for i in range(order)])) def func_reference(q): '''Reference function: reference_equation Let's say I care about the range [0..1]. It gets less linear as I get further away from 0 ''' # needs to manually match 'reference_equation' above return 0.1 * (q+0.2)*(q+0.2) + 3.0 def func_hypothesis(q, b): '''Hypothesis based on parameters ''' S = model_matrix(q, len(b)) return nps.matmult(b, nps.transpose(S)) def compute_outliernesses(J, x, jq, k_dima, k_cook): '''Computes all the outlierness/Cook's D metrics I have 8 things I can compute coming from 3 yes/no choices. These are all very similar, with two pairs actually coming out identical. I choose: - Are we detecting outliers, or looking at effects of a new query point? - Dima's outlierness factor or Cook's D - Look ONLY at the effect on the other variables, or on the other variables AND self? If we're detecting outliers, we REMOVE measurements from the dataset, and see what happens to the fit. If we're looking at effects of a new query point, we see what happend if we ADD measurements Dima's outlierness factor metric looks at what happens to the cost function E = norm2(x). Specifically I look at (norm2(x_before) - norm(x_after))/Nmeasurements Cook's D instead looks at (norm2(x_before - x_after)) * k for some constant k. Finally, we can decide whether to include the effects on the measurements we're adding/removing, or not. Note that here I only look at adding/removing SCALAR measurements ============= This is similar-to, but not exactly-the-same-as Cook's D. I assume the least squares fit optimizes a cost function E = norm2(x). The outlierness factor I return is f = 1/Nmeasurements (E(outliers and inliers) - E(inliers only)) For a scalar measurement, this solves to k = xo^2 / Nmeasurements B = 1.0/(jt inv(JtJ) j - 1) f = -k * B (see the comment in dogleg_getOutliernessFactors() for a description) Note that my metric is proportional to norm2(x_io) - norm2(x_i). This is NOT the same as Cook's distance, which is proportional to norm2(x_io - x_i). It's not yet obvious to me which is better There're several slightly-different definitions of Cook's D and of a rule-of-thumb threshold floating around on the internet. Wikipedia says: D = norm2(x_io - x_i)^2 / (Nstate * norm2(x_io)/(Nmeasurements - Nstate)) D_threshold = 1 An article https://www.nature.com/articles/nmeth.3812 says D = norm2(x_io - x_i)^2 / ((Nstate+1) * norm2(x_io)/(Nmeasurements - Nstate -1)) D_threshold = 4/Nmeasurements Here I use the second definition. That definition expands to k = xo^2 / ((Nstate+1) * norm2(x_io)/(Nmeasurements - Nstate -1)) B = 1.0/(jt inv(JtJ) j - 1) f = k * (B + B*B) ''' Nmeasurements,Nstate = J.shape # The A values for each measurement Aoutliers = nps.inner(J, nps.transpose(np.linalg.pinv(J))) Aquery = nps.inner(jq, nps.transpose(np.linalg.solve(nps.matmult(nps.transpose(J),J), nps.transpose(jq)))) def dima(): k = k_dima k = 1 # Here the metrics are linear, so self + others = self_others def outliers(): B = 1.0 / (Aoutliers - 1.0) return dict( self = k * x*x, others = k * x*x*(-B-1), self_others = k * x*x*(-B )) def query(): B = 1.0 / (Aquery + 1.0) return dict( self = k * ( B*B), others = k * (B-B*B), self_others = k * (B)) return dict(outliers = outliers(), query = query()) def cook(): k = k_cook k = 1 # Here the metrics maybe aren't linear (I need to think about it), so # maybe self + others != self_others. I thus am not returning the "self" # metric def outliers(): B = 1.0 / (Aoutliers - 1.0) return dict( self_others = k * x*x*(B+B*B ) , others = k * x*x*(-B-1)) def query(): B = 1.0 / (Aquery + 1.0) return dict( self_others = k * (1-B) , others = k * (B-B*B)) return dict(outliers = outliers(), query = query()) return dict(cook = cook(), dima = dima()) def outlierness_test(J, x, f, outlierness, k_dima, k_cook, i=0): r'''Test the computation of outlierness I have an analytical expression for this computed in compute_outliernesses(). This explicitly computes the quantity represented by compute_outliernesses() to make sure that that analytical expression is correct ''' # I reoptimize without measurement i E0 = nps.inner(x,x) J1 = nps.glue(J[:i,:], J[(i+1):,:], axis=-2) f1 = nps.glue(f[:i ], f[(i+1): ], axis=-1) b1 = nps.matmult( f1, nps.transpose(np.linalg.pinv(J1))) x1 = nps.matmult(b1, nps.transpose(J1)) - f1 E1 = nps.inner(x1,x1) report_mismatch_relerr( (E0-E1) * k_dima, outlierness['self_others'][i], "self_others outlierness computed analytically, explicitly") report_mismatch_relerr( (E0-x[i]*x[i] - E1) * k_dima, outlierness['others'][i], "others outlierness computed analytically, explicitly") def CooksD_test(J, x, f, CooksD, k_dima, k_cook, i=0): r'''Test the computation of Cook's D I have an analytical expression for this computed in compute_outliernesses(). This explicitly computes the quantity represented by compute_outliernesses() to make sure that that analytical expression is correct ''' # I reoptimize without measurement i Nmeasurements,Nstate = J.shape J1 = nps.glue(J[:i,:], J[(i+1):,:], axis=-2) f1 = nps.glue(f[:i ], f[(i+1): ], axis=-1) b1 = nps.matmult( f1, nps.transpose(np.linalg.pinv(J1))) x1 = nps.matmult(b1, nps.transpose(J)) - f dx = x1-x report_mismatch_relerr( nps.inner(dx,dx) * k_cook, CooksD['self_others'][i], "self_others CooksD computed analytically, explicitly") report_mismatch_relerr( (nps.inner(dx,dx) - dx[i]*dx[i]) * k_cook, CooksD['others'][i], "others CooksD computed analytically, explicitly") def outlierness_query_test(J,b,x, f, query,fquery_ref, outlierness_nox, k_dima, k_cook, i=0): r'''Test the concept of outlierness for querying hypothetical data fquery_test = f(q) isn't true here. If it WERE true, the x of the query point would be 0 (we fit the model exactly), so the outlierness factor would be 0 also ''' # current solve E0 = nps.inner(x,x) query = query [i] fquery_ref = fquery_ref[i] # I add a new point, and reoptimize fquery = func_hypothesis(query,b) xquery = fquery - fquery_ref jquery = model_matrix(query, len(b)) J1 = nps.glue(J, jquery, axis=-2) f1 = nps.glue(f, fquery_ref, axis=-1) b1 = nps.matmult( f1, nps.transpose(np.linalg.pinv(J1))) x1 = nps.matmult(b1, nps.transpose(J1)) - f1 E1 = nps.inner(x1,x1) report_mismatch_relerr( (x1[-1]*x1[-1]) * k_dima, outlierness_nox['self'][i]*xquery*xquery, "self query-outlierness computed analytically, explicitly") report_mismatch_relerr( (E1-x1[-1]*x1[-1] - E0) * k_dima, outlierness_nox['others'][i]*xquery*xquery, "others query-outlierness computed analytically, explicitly") report_mismatch_relerr( (E1 - E0) * k_dima, outlierness_nox['self_others'][i]*xquery*xquery, "self_others query-outlierness computed analytically, explicitly") def CooksD_query_test(J,b,x, f, query,fquery_ref, CooksD_nox, k_dima, k_cook, i=0): r'''Test the concept of CooksD for querying hypothetical data fquery_test = f(q) isn't true here. If it WERE true, the x of the query point would be 0 (we fit the model exactly), so the outlierness factor would be 0 also ''' # current solve Nmeasurements,Nstate = J.shape query = query [i] fquery_ref = fquery_ref[i] # I add a new point, and reoptimize fquery = func_hypothesis(query,b) xquery = fquery - fquery_ref jquery = model_matrix(query, len(b)) J1 = nps.glue(J, jquery, axis=-2) f1 = nps.glue(f, fquery_ref, axis=-1) b1 = nps.matmult( f1, nps.transpose(np.linalg.pinv(J1))) x1 = nps.matmult(b1, nps.transpose(J1)) - f1 dx = x1[:-1] - x dx_both = x1 - nps.glue(x,xquery, axis=-1) report_mismatch_relerr( nps.inner(dx_both,dx_both)*k_cook, CooksD_nox['self_others'][i]*xquery*xquery, "self_others query-CooksD computed analytically, explicitly") report_mismatch_relerr( nps.inner(dx,dx)*k_cook, CooksD_nox['others'][i]*xquery*xquery, "others query-CooksD computed analytically, explicitly") def Var_df(J, squery, stdev): r'''Propagates noise in input to noise in f noise in input -> noise in params -> noise in f db ~ M dm where M = inv(JtJ)Jt df = df/db db df/db = squery Var(dm) = stdev^2 I -> Var(df) = stdev^2 squery inv(JtJ) Jt J inv(JtJ) squeryt = = stdev^2 squery inv(JtJ) squeryt This function broadcasts over squery ''' return \ nps.inner(squery, nps.transpose(np.linalg.solve(nps.matmult(nps.transpose(J),J), nps.transpose(squery)))) *stdev*stdev def histogram(x, **kwargs): h,edges = np.histogram(x, bins=20, **kwargs) centers = (edges[1:] + edges[0:-1])/2 return h,centers,edges[1]-edges[0] def generate_dataset(N, noise_stdev): q = np.random.rand(N) fref = func_reference(q) fnoise = np.random.randn(N) * noise_stdev f = fref + fnoise return q,f def fit(q, f, order): S = model_matrix(q, order) J = S b = nps.matmult( f, nps.transpose(np.linalg.pinv(S))) x = func_hypothesis(q,b) - f return b,J,x def test_order(q,f, query, order): # I look at linear and quadratic models: a0 + a1 q + a2 q^2, with a2=0 for the # linear case. I use plain least squares. The parameter vector is [a0 a1 a2]t. S # = [1 q q^2], so the measurement vector x = S b - f. E = norm2(x). J = dx/db = # S. # # Note the problem "order" is the number of parameters, so a linear model has # order==2 b,J,x = fit(q,f,order) Nmeasurements,Nstate = J.shape k_dima = 1.0/Nmeasurements k_cook = 1.0/((Nstate + 1.0) * nps.inner(x,x)/(Nmeasurements - Nstate - 1.0)) report_mismatch_abserr(np.linalg.norm(nps.matmult(x,J)), 0, "Jtx") squery = model_matrix(query, order) fquery = func_hypothesis(query, b) metrics = compute_outliernesses(J,x, squery, k_dima, k_cook) outlierness_test(J, x, f, metrics['dima']['outliers'], k_dima, k_cook, i=10) CooksD_test (J, x, f, metrics['cook']['outliers'], k_dima, k_cook, i=10) outlierness_query_test(J,b,x,f, query, fquery + 1.2e-3, metrics['dima']['query'], k_dima, k_cook, i=10 ) CooksD_query_test (J,b,x,f, query, fquery + 1.2e-3, metrics['cook']['query'], k_dima, k_cook, i=10 ) Vquery = Var_df(J, squery, noise_stdev) return \ dict( b = b, J = J, x = x, Vquery = Vquery, squery = squery, fquery = fquery, metrics = metrics, k_dima = k_dima, k_cook = k_cook ) q,f = generate_dataset(N, noise_stdev) query = np.linspace(-1,2, Nquery) stats = test_order(q,f, query, order) def show_outlierness(order, N, q, f, query, cooks_threshold, **stats): p = gp.gnuplotlib(equation='1.0 title "Threshold"', title = "Outlierness with order={} Npoints={} stdev={}".format(order, N, noise_stdev)) p.plot( (q, stats['metrics']['dima']['outliers']['self_others']/stats['dimas_threshold'], dict(legend="Dima's self+others outlierness / threshold", _with='points')), (q, stats['metrics']['dima']['outliers']['others']/stats['dimas_threshold'], dict(legend="Dima's others-ONLY outlierness / threshold", _with='points')), (q, stats['metrics']['cook']['outliers']['self_others']/cooks_threshold, dict(legend="Cook's self+others outlierness / threshold", _with='points')), (q, stats['metrics']['cook']['outliers']['others']/cooks_threshold, dict(legend="Cook's others-ONLY outlierness / threshold", _with='points'))) return p def show_uncertainty(order, N, q, f, query, cooks_threshold, **stats): coeffs = stats['b'] fitted_equation = '+'.join(['{} * x**{}'.format(coeffs[i], i) for i in range(len(coeffs))]) p = gp.gnuplotlib(equation='({})-({}) title "Fit error off ground truth; y2 axis +- noise stdev" axis x1y2'.format(reference_equation,fitted_equation), title = "Uncertainty with order={} Npoints={} stdev={}".format(order, N, noise_stdev), ymin=0, y2range = (-noise_stdev, noise_stdev), _set = 'y2tics' ) # p.plot( # (query, np.sqrt(stats['Vquery]), # dict(legend='expectederr (y2)', _with='lines', y2=1)), # (query, stats['metrics']['dima']['query']['self_others']*noise_stdev*noise_stdev / stats['dimas_threshold'], # dict(legend="Dima's self+others query / threshold", # _with='linespoints')), # (query, stats['metrics']['dima']['query']['others']*noise_stdev*noise_stdev / stats['dimas_threshold'], # dict(legend="Dima's others-ONLY query / threshold", # _with='linespoints')), # (query, stats['metrics']['cook']['query']['self_others']*noise_stdev*noise_stdev / cooks_threshold, # dict(legend="Cook's self+others query / threshold", # _with='linespoints')), # (query, stats['metrics']['cook']['query']['others']*noise_stdev*noise_stdev / cooks_threshold, # dict(legend="Cook's others-ONLY query / threshold", # _with='linespoints'))) p.plot( # (query, np.sqrt(Vquery), # dict(legend='Expected error due to input noise (y2)', _with='lines', y2=1)), (query, np.sqrt(stats['metrics']['dima']['query']['self'])*noise_stdev, dict(legend="Dima's self-ONLY; 1 point", _with='linespoints')), (query, np.sqrt(stats['metrics']['dima']['query']['others'])*noise_stdev, dict(legend="Dima's others-ONLY ALL {} points".format(Nquery), _with='linespoints')), # (query, np.sqrt(stats['metrics']['dima']['query']['others'])*noise_stdev * Nquery, # dict(legend="Dima's others-ONLY 1 point average", # _with='linespoints')), ) return p def show_fit (order, N, q, f, query, cooks_threshold, **stats): p = gp.gnuplotlib(equation='{} with lines title "reference"'.format(reference_equation), xrange=[-1,2], title = "Fit with order={} Npoints={} stdev={}".format(order, N, noise_stdev)) p.plot((q, f, dict(legend = 'input data', _with='points')), (query, stats['fquery'] + np.sqrt(stats['Vquery'])*np.array(((1,),(0,),(-1,),)), dict(legend = 'stdev_f', _with='lines'))) return p def show_distribution(outlierness): h,c,w = histogram(outlierness) raise Exception("not done") #gp.plot() # This is hoaky, but reasonable, I think. Some of the unscaled metrics are # identical between mine and Cook's expressions. So I scale Cook's 4/N threshold # to apply to me. Works ok. cooks_threshold = 4.0 / N stats['dimas_threshold'] = cooks_threshold / stats['k_cook'] * stats['k_dima'] # These all show interesting things; turn one of them on plots = [ show_outlierness (order, N, q, f, query, cooks_threshold, **stats), show_fit (order, N, q, f, query, cooks_threshold, **stats), show_uncertainty (order, N, q, f, query, cooks_threshold, **stats) ] for p in plots: if os.fork() == 0: p.wait() sys.exit() os.wait() # Conclusions: # # - Cook's 4/N threshold looks reasonable. # # - For detecting outliers my self+others metric is way too outlier-happy. It # makes most of the linear fit points look outliery # # - For uncertainty, Dima's self+others is the only query metric that's maybe # usable. It qualitatively a negated and shifted Cook's self+others mrcal-2.4.1/analyses/splines/ 0000775 0000000 0000000 00000000000 14563016623 0016116 5 ustar 00root root 0000000 0000000 mrcal-2.4.1/analyses/splines/README 0000664 0000000 0000000 00000003347 14563016623 0017005 0 ustar 00root root 0000000 0000000 The test and validation tools used in the development of the b-spline interpolated models live here. First I wanted to come up with a surface parametrization scheme. I wanted to project the full almost-360deg view into some sort of 2D domain that is cartesian-ish. Then I would grid this cartesian 2D domain with a regular grid of control points, and I'd spline that. But does such a mapping exist? What is it? The show-fisheye-grid.py script exists for this purpose. It takes in an existing model, grids the imager, unprojects the points, and projects the vectors back to a number of common fisheye projections. I applied this to some wide models I had (their inaccurate OPENCV8 fits), and observed that the stereographic model works well here: a cartensian grid in the imager produces a cartesian-ish grid in the stereographic projection. The other projections look like they'd work also. The pinhole projection most notably does NOT work; its projections run off to infinity at the edges. Then I needed to find a surface representation, and the expressions. I'm using b-splines: they have nice continuity properties, and they have nice local support: so my jacobian will remain sparse. The bsplines.py script derives and validates the spline equations. fpbisp.py is a part of that. Then I implemented the spline equations in mrcal.c, and wanted to make sure the behavior was correct. This is done by the verify-interpolated-surface.py script. It produces a random splined model, projects it with mrcal, and visualizes the sparse control point surface with the dense sampled surface. The sampled surface should appear smooth, and should be guided by the control points. This tool is the first test that the mrcal projection is implemented correctly mrcal-2.4.1/analyses/splines/bsplines.py 0000775 0000000 0000000 00000037100 14563016623 0020313 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3 r'''Studies the 2d surface b-spline interpolation The expressions for the interpolated surface I need aren't entirely clear, so I want to study this somewhat. This script does that. I want to use b-splines to drive the generic camera models. These provide local support in the control points at the expense of not interpolating the control points. I don't NEED an interpolating spline, so this is just fine. Let's assume the knots lie at integer coordinates. I want a function that takes in - the control points in a neighborhood of the spline segment - the query point x, scaled to [0,1] in the spline segment This is all nontrivial for some reason, and there're several implementations floating around with slightly different inputs. I have - sample_segment_cubic() From a generic calibration research library: https://github.com/puzzlepaint/camera_calibration/blob/master/applications/camera_calibration/scripts/derive_jacobians.py in the EvalUniformCubicBSpline() function. This does what I want (query point in [0,1], control points around it, one segment at a time), but the source of the expression isn't given. I'd like to re-derive it, and then possibly extend it - splev_local() wraps scipy.interpolate.splev() - splev_translated() Followed sources of scipy.interpolate.splev() to the core fortran functions in fpbspl.f and splev.f. I then ran these through f2c, pythonified it, and simplified it. The result is sympy-able - splev_wikipedia() Sample implementation of De Boor's algorithm from wikipedia: https://en.wikipedia.org/wiki/De_Boor%27s_algorithm from EvalUniformCubicBSpline() in camera_calibration/applications/camera_calibration/scripts/derive_jacobians.py. translated to [0,1] from [3,4] ''' import sys import numpy as np import numpysane as nps import gnuplotlib as gp skip_plots = False import scipy.interpolate from scipy.interpolate import _fitpack def sample_segment_cubic(x, a,b,c,d): A = (-x**3 + 3*x**2 - 3*x + 1)/6 B = (3 * x**3/2 - 3*x**2 + 2)/3 C = (-3 * x**3 + 3*x**2 + 3*x + 1)/6 D = (x * x * x) / 6 return A*a + B*b + C*c + D*d def splev_local(x, t, c, k, der=0, ext=0): y = scipy.interpolate.splev(x, (t,c,k), der, ext) return y.reshape(x.shape) def splev_translated(x, t, c, k, l): # assumes that t[l] <= x <= t[l+1] # print(f"l = {l}") # print((t[l], x, t[l+1])) l += 1 # l is now a fortran-style index hh = [0] * 19 h__ = [0] * 19 h__[-1 + 1] = 1 i__1 = k for j in range(1,i__1+1): i__2 = j for i__ in range(1,i__2+1): hh[-1 + i__] = h__[-1 + i__] h__[-1 + 1] = 0 i__2 = j for i__ in range(1,i__2+1): li = l + i__ lj = li - j if t[-1 + li] != t[-1 + lj]: h__[-1 + i__] += (t[-1 + li] - x) * hh[-1 + i__] / (t[-1 + li] - t[-1 + lj]) h__[-1 + i__ + 1] = (x - t[-1 + lj]) * hh[-1 + i__] / (t[-1 + li] - t[-1 + lj]) else: h__[-1 + i__ + 1] = 0 sp = 0 ll = l - (k+1) i__2 = (k+1) for j in range(1,i__2+1): ll += 1 sp += c[-1 + ll] * h__[-1 + j] return sp # from https://en.wikipedia.org/wiki/De_Boor%27s_algorithm def splev_wikipedia(k: int, x: int, t, c, p: int): """Evaluates S(x). Arguments --------- k: Index of knot interval that contains x. x: Position. t: Array of knot positions, needs to be padded as described above. c: Array of control points. We will look at c[k-p .. k] p: Degree of B-spline. """ # make sure I never reference c out of bounds if k-p < 0: raise Exception("c referenced out of min bounds") if k >= len(c): raise Exception("c referenced out of max bounds") d = [c[j + k - p] for j in range(0, p+1)] for r in range(1, p+1): for j in range(p, r-1, -1): alpha = (x - t[j+k-p]) / (t[j+1+k-r] - t[j+k-p]) d[j] = (1 - alpha) * d[j-1] + alpha * d[j] return d[p] ############################################################################## # First I confirm that the functions from numpy and from wikipedia produce the # same results. I don't care about edge cases for now. I query an arbitrary # point N = 30 t = np.arange( N, dtype=int) k = 3 c = np.random.rand(len(t)) x = 4.3 # may have trouble exactly AT the knots. the edge logic isn't quite what I want l = np.searchsorted(np.array(t), x)-1 y0 = splev_local(np.array((x,)),t,c,k)[0] y1 = splev_translated(x, t, c, k, l) y2 = splev_wikipedia(l, x, t, c, k) print(f"These should all match: {y0} {y1} {y2}") print(f" err1 = {y1-y0}") print(f" err2 = {y2-y0}") ############################################################################## # OK, good. I want to make sure that the spline roughly follows the curve # defined by the control points. There should be no x offset or anything of that # sort N = 30 t = np.arange( N, dtype=int) k = 3 c = np.random.rand(len(t)) Npad = 10 x = np.linspace(Npad, N-Npad, 1000) ########### sample_segment_cubic() @nps.broadcast_define(((),), ()) def sample_cubic(x, cp): i = int(x//1) q = x-i try: return sample_segment_cubic(q, *cp[i-1:i+3]) except: return 0 y = sample_cubic(x, c) c2 = c.copy() c2[int(N//2)] *= 1.1 y2 = sample_cubic(x, c2) if not skip_plots: plot1 = gp.gnuplotlib(title = 'Cubic splines: response to control point perturbation', # hardcopy = '/tmp/cubic-spline-perturbations.svg', # terminal = 'svg size 800,600 noenhanced solid dynamic font ",12"' ) plot1.plot( (x, nps.cat(y,y2), dict(_with = np.array(('lines lc "blue"', 'lines lc "sea-green"')), legend = np.array(('Spline: baseline', 'Spline: tweaked one control point')))), (t[:len(c)], nps.cat(c,c2), dict(_with = np.array(('points pt 1 ps 1 lc "blue"', 'points pt 2 ps 1 lc "sea-green"')), legend= np.array(('Control points: baseline', 'Control points: tweaked one control point')))), (x, y-y2, dict(_with = 'lines lc "red"', legend = 'Difference', y2 = 1)), _xrange = (10.5,19.5), y2max = 0.01, ylabel = 'Spline value', y2label = 'Difference due to perturbation',) ########### sample_wikipedia() @nps.broadcast_define(((),), ()) def sample_wikipedia(x, t, c, k): return splev_wikipedia(np.searchsorted(np.array(t), x)-1,x,t,c,k) @nps.broadcast_define(((),), ()) def sample_wikipedia_integer_knots(x, c, k): t = np.arange(len(c) + k, dtype=int) l = int(x//1) offset = int((k+1)//2) return splev_wikipedia(l,x,t,c[offset:],k) if 1: y = sample_wikipedia_integer_knots(x,c,k) else: offset = int((k+1)//2) y = sample_wikipedia(x,t, c[offset:], k) if not skip_plots: plot2 = gp.gnuplotlib(title = 'sample_wikipedia') plot2.plot( (x, y, dict(_with='lines')), (t[:len(c)], c, dict(_with='linespoints pt 7 ps 2')) ) print("these two plots should look the same: we're using two implementation of the same algorithm to interpolate the same data") plot2.wait() plot1.wait() # Now I use sympy to get the polynomial coefficients from sample_wikipedia. # These should match the ones in sample_segment_cubic() import sympy c = sympy.symbols(f'c:{len(c)}') x = sympy.symbols('x') # Which interval we're in. Arbitrary. In the middle somewhere l = int(N//2) print("Should match A,B,C,D coefficients in sample_segment_cubic()") s = splev_wikipedia(l, # I want 'x' to be [0,1] within the interval, but this # function wants x in the whole domain x+l, np.arange( N, dtype=int), c, k).expand() print(s.coeff(c[12])) print(s.coeff(c[13])) print(s.coeff(c[14])) print(s.coeff(c[15])) print("Should also match A,B,C,D coefficients in sample_segment_cubic()") s = splev_translated(# I want 'x' to be [0,1] within the interval, but this # function wants x in the whole domain x+l, np.arange( N, dtype=int), c, k, l).expand() print(s.coeff(c[12])) print(s.coeff(c[13])) print(s.coeff(c[14])) print(s.coeff(c[15])) ######################################################### # Great. More questions. Here I have a cubic spline (k==3). And to evaluate the # spline value I need to have 4 control point values available, 2 on either # side. Piecewise-linear splines are too rough, but quadratic splines could work # (k==2). What do the expressions look like? How many neighbors do I need? Here # the control point values c represent the function value between adjacent # knots, so each segment uses 3 neighboring control points, and is defined in a # region [-0.5..0.5] off the center control point print("======================== k = 2") N = 30 t = np.arange( N, dtype=int) k = 2 # c[0,1,2] corresponds to is t[-0.5 0.5 1.5 2.5 ... c = np.random.rand(len(t)) x = 4.3 # may have trouble exactly AT the knots. the edge logic isn't quite what I want l = np.searchsorted(np.array(t), x)-1 y0 = splev_local(np.array((x,)),t,c,k)[0] y1 = splev_translated(x, t, c, k, l) y2 = splev_wikipedia(l, x, t, c, k) print(f"These should all match: {y0} {y1} {y2}") print(f" err1 = {y1-y0}") print(f" err2 = {y2-y0}") ############################################################################## # OK, good. I want to make sure that the spline roughly follows the curve # defined by the control points. There should be no x offset or anything of that # sort if not skip_plots: N = 30 t = np.arange( N, dtype=int) k = 2 c = np.random.rand(len(t)) Npad = 10 x = np.linspace(Npad, N-Npad, 1000) offset = int((k+1)//2) y = sample_wikipedia(x,t-0.5, c[offset:], k) xm = (x[1:] + x[:-1]) / 2. d = np.diff(y) / np.diff(x) plot1 = gp.gnuplotlib(title = 'k==2; sample_wikipedia') plot1.plot( (x, y, dict(_with='lines', legend='spline')), (xm, d, dict(_with='lines', y2=1, legend='diff')), (t[:len(c)], c, dict(_with='linespoints pt 7 ps 2', legend='control points'))) @nps.broadcast_define(((),), ()) def sample_splev_translated(x, t, c, k): l = np.searchsorted(np.array(t), x)-1 return splev_translated(x,t,c,k,l) y = sample_splev_translated(x,t-0.5, c[offset:], k) xm = (x[1:] + x[:-1]) / 2. d = np.diff(y) / np.diff(x) plot2 = gp.gnuplotlib(title = 'k==2; splev_translated') plot2.plot( (x, y, dict(_with='lines', legend='spline')), (xm, d, dict(_with='lines', y2=1, legend='diff')), (t[:len(c)], c, dict(_with='linespoints pt 7 ps 2', legend='control points'))) # These are the functions I'm going to use. Derived by the sympy steps # immediately after this def sample_segment_quadratic(x, a,b,c): A = (4*x**2 - 4*x + 1)/8 B = (3 - 4*x**2)/4 C = (4*x**2 + 4*x + 1)/8 return A*a + B*b + C*c @nps.broadcast_define(((),), ()) def sample_quadratic(x, cp): i = int((x+0.5)//1) q = x-i try: return sample_segment_quadratic(q, *cp[i-1:i+2]) except: return 0 y = sample_quadratic(x, c) xm = (x[1:] + x[:-1]) / 2. d = np.diff(y) / np.diff(x) plot3 = gp.gnuplotlib(title = 'k==2; sample_quadratic') plot3.plot( (x, y, dict(_with='lines', legend='spline')), (xm, d, dict(_with='lines', y2=1, legend='diff')), (t[:len(c)], c, dict(_with='linespoints pt 7 ps 2', legend='control points'))) plot3.wait() plot2.wait() plot1.wait() print("these 3 plots should look the same: we're using different implementation of the same algorithm to interpolate the same data") # ################################## # # OK, these match. Let's get the expression of the polynomial in a segment. This # # was used to construct sample_segment_quadratic() above # c = sympy.symbols(f'c:{len(c)}') # x = sympy.symbols('x') # l = int(N//2) # print("A,B,C for k==2 using splev_wikipedia()") # s = splev_wikipedia(l, # # I want 'x' to be [-0.5..0.5] within the interval, but this # # function wants x in the whole domain # x+l, # np.arange( N, dtype=int) - sympy.Rational(1,2), c, k).expand() # print(s) # print(s.coeff(c[13])) # print(s.coeff(c[14])) # print(s.coeff(c[15])) # # I see this: # # c13*x**2/2 - c13*x/2 + c13/8 - c14*x**2 + 3*c14/4 + c15*x**2/2 + c15*x/2 + c15/8 # # x**2/2 - x/2 + 1/8 # # 3/4 - x**2 # # x**2/2 + x/2 + 1/8 ############## compare cubic, quadratic # I now have nice expressions for quadratic and cubic interpolations. Both are # C1 continuous, but the cubic interpolation is C2 continuous too. How much do I # care? Let's at least compare them visually N = 30 t = np.arange( N, dtype=int) c = np.random.rand(len(t)) Npad = 10 x = np.linspace(Npad, N-Npad, 1000) y2 = sample_quadratic(x,c) y3 = sample_cubic( x,c) if not skip_plots: gp.plot( (x, nps.cat(y2,y3), dict(_with='lines', legend=np.array(('quadratic', 'cubic')))), (t[:len(c)], c, dict(_with='linespoints pt 7 ps 2', legend='control points')), title = "Comparing quadratic and cubic b-spline interpolation", wait = True) # Visually, neither looks better than the other. The quadratic spline follows # the control points closer. Looks like it's trying to match the slope at the # halfway point. Maybe that's bad, and the quadratic curve is too wiggly? We'll # see #################################################################### # Great. Final set of questions: how do you we make a 2D spline surface? The # generic calibration research library # (https://github.com/puzzlepaint/camera_calibration) Does a set of 1d # interpolations in one dimension, and then interpolates the 4 interpolated # values along the other dimension. Questions: # # Does order matter? I can do x and then y or y then x # # And is there a better way? scipy has 2d b-spline interpolation routines. Do # they do something better? # # ############### x-y or y-x? import sympy from sympy.abc import x,y cp = sympy.symbols('cp:4(:4)') # x then y xs = [sample_segment_cubic( x, cp[i*4 + 0], cp[i*4 + 1], cp[i*4 + 2], cp[i*4 + 3] ) for i in range(4)] yxs = sample_segment_cubic(y, *xs) # y then x ys = [sample_segment_cubic( y, cp[0*4 + i], cp[1*4 + i], cp[2*4 + i], cp[3*4 + i] ) for i in range(4)] xys = sample_segment_cubic(x, *ys) print(f"Bicubic interpolation. x-then-y and y-then-x difference: {(xys - yxs).expand()}") ########### Alright. Apparently either order is ok. Does scipy do something ########### different for 2d interpolation? I compare the 1d-then-1d ########### interpolation above to fitpack results: from fpbisp import fpbisp_ N = 30 t = np.arange( N, dtype=int) k = 3 cp = np.array(sympy.symbols('cp:40(:40)'), dtype=np.object).reshape(40,40) lx = 3 ly = 5 z = fpbisp_(t, t, k, cp, x+lx, lx, y+ly, ly) err = z - yxs print(f"Bicubic interpolation. difference(1d-then-1d, FITPACK): {err.expand()}") # Apparently chaining 1D interpolations produces identical results to what FITPACK is doing mrcal-2.4.1/analyses/splines/fpbisp.py 0000664 0000000 0000000 00000003156 14563016623 0017760 0 ustar 00root root 0000000 0000000 r'''Translation of fpbpl and fpbisp from Fortran to Python The scipy source includes FITPACK, which implements b-spline interpolation. I converted its 2D surface interpolation routine (fpbisp) to C (via f2c), and then semi-manually to Python. I can then feed it sympy symbols, and get out analytical expressions, which are surprisingly difficult to find in a nice usable form on the internet. The original fpbisp implementation is at scipy/interpolate/fitpack/fpbisp.f The analysis that uses this conversion lives in bsplines.py ''' import sympy def fpbspl_(t, k, x, l): h__ = [None] * 100 hh = [None] * 100 h__[-1 + 1] = sympy.Integer(1) i__1 = k for j in range(1,i__1+1): i__2 = j for i__ in range(1,i__2+1): hh[i__ - 1] = h__[-1 + i__] h__[-1 + 1] = 0 i__2 = j for i__ in range(1,i__2+1): li = l + i__ lj = li - j f = hh[i__ - 1] / (t[li] - t[lj]) h__[-1 + i__] += f * (t[li] - x) h__[-1 + i__ + 1] = f * (x - t[lj]) return h__ # argx is between tx[lx] and tx[lx+1]. Same with y def fpbisp_(tx, ty, k, c, argx, lx, argy, ly): wx = [None] * 100 wy = [None] * 100 h__ = fpbspl_(tx, k, argx, lx) for j in range(1, (k+1)+1): wx[1 + j] = h__[j - 1] h__ = fpbspl_(ty, k, argy, ly) for j in range(1,(k+1)+1): wy[1 + j] = h__[j - 1] for i1 in range(1,(k+1)+1): h__[i1 - 1] = wx[1 + i1] sp = 0 for i1 in range(1,(k+1)+1): for j1 in range(1,(k+1)+1): sp += c[j1-1,i1-1] * h__[i1 - 1] * wy[1 + j1] return sp mrcal-2.4.1/analyses/splines/show-fisheye-grid.py 0000775 0000000 0000000 00000010442 14563016623 0022031 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3 r'''Evaluate 2D griddings of a camera view I'm interested in gridding the observation vectors for projection. These observation vectors have length-1, so they're 3D quantities that live in a 2D manifold. I need some sort of a 2D representation so that I can convert between this representation and the 3D vectors without worrying about constraints. I'd like a rectangular 2D gridding of observation vectors to more-or-less map to a gridding of projected pixel coordinates: I want both to be as rectangular as possible. This tool grids the imager, and plots the unprojected grid for a number of different 3d->2d schemes. I take in an existing camera mode, and I grid its imager. I unproject each pixel, and reproject back using the scheme I'm looking at. Most schemes are fisheye projections described at https://en.wikipedia.org/wiki/Fisheye_lens mrcal-show-distortion-off-pinhole --radial also compares a given model with fisheye projections ''' import sys import argparse import re import os def parse_args(): parser = \ argparse.ArgumentParser(description = __doc__, formatter_class=argparse.RawDescriptionHelpFormatter) parser.add_argument('scheme', type=str, choices=('pinhole', 'stereographic', 'equidistant', 'equisolidangle', 'orthographic'), help='''Type of fisheye model to visualize. For a description of the choices see https://en.wikipedia.org/wiki/Fisheye_lens''') parser.add_argument('model', type=str, help='''Camera model to grid''') args = parser.parse_args() return args args = parse_args() # arg-parsing is done before the imports so that --help works without building # stuff, so that I can generate the manpages and README import numpy as np import numpysane as nps import gnuplotlib as gp testdir = os.path.dirname(os.path.realpath(__file__)) # I import the LOCAL mrcal since that's what I'm testing sys.path[:0] = f"{testdir}/../..", import mrcal @nps.broadcast_define(((3,),), (2,)) def project_simple(v, d): k = d[4:] fxy = d[:2] cxy = d[2:4] x,y = v[:2]/v[2] r2 = x*x + y*y r4 = r2*r2 r6 = r4*r2 a1 = 2*x*y a2 = r2 + 2*x*x a3 = r2 + 2*y*y cdist = 1 + k[0]*r2 + k[1]*r4 + k[4]*r6 icdist2 = 1./(1 + k[5]*r2 + k[6]*r4 + k[7]*r6) return np.array(( x*cdist*icdist2 + k[2]*a1 + k[3]*a2, y*cdist*icdist2 + k[2]*a3 + k[3]*a1 )) * fxy + cxy @nps.broadcast_define(((3,),), (2,)) def project_radial_numdenom(v, d): k = d[4:] fxy = d[:2] cxy = d[2:4] x,y = v[:2]/v[2] r2 = x*x + y*y r4 = r2*r2 r6 = r4*r2 a1 = 2*x*y a2 = r2 + 2*x*x a3 = r2 + 2*y*y return np.array((1 + k[0]*r2 + k[1]*r4 + k[4]*r6, 1 + k[5]*r2 + k[6]*r4 + k[7]*r6)) try: m = mrcal.cameramodel(args.model) except: print(f"Couldn't read '{args.model}' as a camera model", file=sys.stderr) sys.exit(1) W,H = m.imagersize() Nw = 40 Nh = 30 # shape (Nh,Nw,2) xy = \ nps.mv(nps.cat(*np.meshgrid( np.linspace(0,W-1,Nw), np.linspace(0,H-1,Nh) )), 0,-1) fxy = m.intrinsics()[1][0:2] cxy = m.intrinsics()[1][2:4] # shape (Nh,Nw,2) v = mrcal.unproject(np.ascontiguousarray(xy), *m.intrinsics()) v0 = mrcal.unproject(cxy, *m.intrinsics()) # shape (Nh,Nw) costh = nps.inner(v,v0) / (nps.mag(v) * nps.mag(v0)) th = np.arccos(costh) # shape (Nh,Nw,2) xy_rel = xy-cxy # shape (Nh,Nw) az = np.arctan2( xy_rel[...,1], xy_rel[..., 0]) if args.scheme == 'stereographic': r = np.tan(th/2.) * 2. elif args.scheme == 'equidistant': r = th elif args.scheme == 'equisolidangle': r = np.sin(th/2.) * 2. elif args.scheme == 'orthographic': r = np.sin(th) elif args.scheme == 'pinhole': r = np.tan(th) else: print("Unknown scheme {args.scheme}. Shouldn't happen. argparse should have taken care of it") mapped = xy_rel * nps.dummy(r/nps.mag(xy_rel),-1) gp.plot(mapped, tuplesize=-2, _with = 'linespoints', title = f"Gridded model '{args.model}' looking at pinhole unprojection with z=1", xlabel = f'Normalized {args.scheme} x', ylabel = f'Normalized {args.scheme} y', square = True, wait = True) mrcal-2.4.1/analyses/splines/verify-interpolated-surface.py 0000775 0000000 0000000 00000011151 14563016623 0024114 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3 f'''Observe the interpolation grid implemented in the C code This is a validation of mrcal.project() ''' import sys import os import numpy as np import numpysane as nps import gnuplotlib as gp testdir = os.path.dirname(os.path.realpath(__file__)) # I import the LOCAL mrcal since that's what I'm testing sys.path[:0] = f"{testdir}/../..", import mrcal order = 3 Nx = 11 Ny = 8 fov_x_deg = 200 # more than 180deg imagersize = np.array((3000,2000)) fxy = np.array((2000., 1900.)) cxy = (imagersize.astype(float) - 1.) / 2. # I want random control points, with some (different) bias on x and y controlpoints_x = np.random.rand(Ny, Nx) * 1 + 0.5 * np.arange(Nx)/Nx controlpoints_y = np.random.rand(Ny, Nx) * 1 - 0.9 * np.arange(Nx)/Nx # to test a delta function # controlpoints_y*=0 # controlpoints_y[4,5] = 1 # The parameters vector dimensions appear (in order from slowest-changing to # fastest-changing): # - y coord # - x coord # - fx/fy parameters = nps.glue( np.array(( fxy[0], fxy[1], cxy[0], cxy[1])), nps.mv(nps.cat(controlpoints_x,controlpoints_y), 0, -1).ravel(), axis = -1 ) # Now I produce a grid of observation vectors indexed on the coords of the # control-point arrays # The index into my spline. ixy has shape (Ny,Nx,2) and contains (x,y) rows Nw = Nx*5 Nh = Ny*5 x_sampled = np.linspace(1,Nx-2,Nh) y_sampled = np.linspace(1,Ny-2,Nw) ixy = \ nps.mv( nps.cat(*np.meshgrid( x_sampled, y_sampled )), 0,-1) ##### this has mostly been implemented in mrcal_project_stereographic() and ##### mrcal_unproject_stereographic() # Stereographic projection function: # p = xyz # rxy = mag(xy) # th = atan2(rxy, z) # u = tan(th/2) * 2. * xy/mag(xy) # q = (u + deltau) * fxy + cxy # # I look up deltau in the splined surface. The index of that lookup is linear # (and cartesian) with u # # So ixy = u * k + (Nxy-1)/2 # # ix is in [0,Nx] (modulo edges). one per control point # # Note that the same scale k is applied for both the x and y indices. The # constants set the center of the spline surface to x=0 and y=0 # # The scale k is set by fov_x_deg (this is at y=0): # # ix_margin = tan(-fov_x_deg/2/2) * 2. * k + (Nx-1)/2 ---> # k = (ix_margin - (Nx-1)/2) / (tan(fov_x_deg/2/2) * 2) # # I want to compute p from (ix,iy). p is unique up-to scale. So let me # arbitrarily set mag(xy) = 1. I define a new var # # jxy = tan(th/2) xy ---> # jxy = (ixy - (Nxy-1)/2) / (2k) # # jxy = tan(th/2) xy # = (1 - cos(th)) / sin(th) xy # = (1 - cos(atan2(1, z))) / sin(atan2(1, z)) xy # = (1 - z/mag(xyz)) / (1/mag(xyz)) xy = # = (mag(xyz) - z) xy = # # mag(jxy) = (mag(xyz) - z) # = sqrt(z^2+1) - z # # Let h = sqrt(z^2+1) + z -> # mag(jxy) h = 1 # h - mag(jxy) = 2z # ---> z = (h - mag(jxy)) / 2 = (1/mag(jxy) - mag(jxy)) / 2 if order == 3: # cubic splines. There's exactly one extra control point on each side past # my fov. So ix_margin = 1 ix_margin = 1 else: # quadratic splines. 1/2 control points on each side past my fov ix_margin = 0.5 k = (ix_margin - (Nx-1)/2) / (np.tan(-fov_x_deg*np.pi/180./2/2) * 2) jxy = (ixy - (np.array( (Nx, Ny), dtype=float) - 1.)/2.) / k / 2. mjxy = nps.mag(jxy) z = (1./mjxy - mjxy) / 2. xy = jxy / nps.dummy(mjxy, -1) # singular at the center. Do I care? p = nps.glue(xy, nps.dummy(z,-1), axis=-1) mxy = nps.mag(xy) # Bam. I have applied a stereographic unprojection to get 3D vectors that would # stereographically project to given spline grid locations. I use the mrcal # internals to project the unprojection, and to get the focal lengths it ended # up using. If the internals were implemented correctly, the dense surface of # focal lengths should follow the sparse surface of spline control points lensmodel_type = f'LENSMODEL_SPLINED_STEREOGRAPHIC_order={order}_Nx={Nx}_Ny={Ny}_fov_x_deg={fov_x_deg}' q = mrcal.project(np.ascontiguousarray(p), lensmodel_type, parameters) th = np.arctan2( nps.mag(p[..., :2]), p[..., 2]) uxy = p[..., :2] * nps.dummy(np.tan(th/2)*2/nps.mag(p[..., :2]), -1) deltau = (q-cxy) / fxy - uxy deltaux,deltauy = nps.mv(deltau, -1,0) gp.plot3d( (deltauy, dict( _with='lines', using=f'($1*{x_sampled[1]-x_sampled[0]}+{x_sampled[0]}):($2*{y_sampled[1]-y_sampled[0]}+{y_sampled[0]}):3' )), (controlpoints_y, dict( _with='points pt 7 ps 2' )), xlabel='x control point index', ylabel='y control point index', title='Deltau-y', squarexy=True, ascii=True, wait=True) mrcal-2.4.1/analyses/triangulation/ 0000775 0000000 0000000 00000000000 14563016623 0017321 5 ustar 00root root 0000000 0000000 mrcal-2.4.1/analyses/triangulation/study.py 0000775 0000000 0000000 00000027716 14563016623 0021063 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3 r'''Study the precision and accuracy of the various triangulation routines''' import sys import argparse import re import os def parse_args(): parser = \ argparse.ArgumentParser(description = __doc__, formatter_class=argparse.RawDescriptionHelpFormatter) parser.add_argument('--Nsamples', type=int, default=100000, help='''How many random samples to evaluate. 100000 by default''') group = parser.add_mutually_exclusive_group(required = True) group.add_argument('--ellipses', action='store_true', help='''Display the ellipses and samples in the xy plane''') group.add_argument('--ranges', action='store_true', help='''Display the distribution of the range''') parser.add_argument('--samples', action='store_true', help='''If --ellipses, plot the samples ALSO. Usually this doesn't clarify anything, so the default is to omit them''') parser.add_argument('--cache', type=str, choices=('read','write'), help=f'''A cache file stores the recalibration results; computing these can take a long time. This option allows us to or write the cache instead of sampling. The cache file is hardcoded to a cache file (in /tmp). By default, we do neither: we don't read the cache (we sample instead), and we do not write it to disk when we're done. This option is useful for tests where we reprocess the same scenario repeatedly''') parser.add_argument('--observed-point', type = float, nargs = 3, default = ( 5000., 300., 2000.), help='''The camera0 coordinate of the observed point. Default is ( 5000., 300., 2000.)''') parser.add_argument('--title', type=str, default = None, help='''Title string for the plot. Overrides the default title. Exclusive with --extratitle''') parser.add_argument('--extratitle', type=str, default = None, help='''Additional string for the plot to append to the default title. Exclusive with --title''') parser.add_argument('--hardcopy', type=str, help='''Write the output to disk, instead of an interactive plot''') parser.add_argument('--terminal', type=str, help=r'''gnuplotlib terminal. The default is good almost always, so most people don't need this option''') parser.add_argument('--set', type=str, action='append', help='''Extra 'set' directives to gnuplotlib. Can be given multiple times''') parser.add_argument('--unset', type=str, action='append', help='''Extra 'unset' directives to gnuplotlib. Can be given multiple times''') args = parser.parse_args() if args.title is not None and \ args.extratitle is not None: print("--title and --extratitle are exclusive", file=sys.stderr) sys.exit(1) return args args = parse_args() import numpy as np import numpysane as nps import gnuplotlib as gp import pickle import os.path # I import the LOCAL mrcal scriptdir = os.path.dirname(os.path.realpath(__file__)) sys.path[:0] = f"{scriptdir}/../..", import mrcal ############ bias visualization # # I simulate pixel noise, and see what that does to the triangulation. Play with # the geometric details to get a sense of how these behave model0 = mrcal.cameramodel( intrinsics = ('LENSMODEL_PINHOLE', np.array((1000., 1000., 500., 500.))), imagersize = np.array((1000,1000)) ) model1 = mrcal.cameramodel( intrinsics = ('LENSMODEL_PINHOLE', np.array((1100., 1100., 500., 500.))), imagersize = np.array((1000,1000)) ) # square camera layout t01 = np.array(( 1., 0.1, -0.2)) R01 = mrcal.R_from_r(np.array((0.001, -0.002, -0.003))) Rt01 = nps.glue(R01, t01, axis=-2) p = np.array(args.observed_point) q0 = mrcal.project(p, *model0.intrinsics()) sigma = 0.1 cache_file = "/tmp/triangulation-study-cache.pickle" if args.cache is None or args.cache == 'write': v0local_noisy, v1local_noisy,v0_noisy,v1_noisy,_,_,_,_ = \ mrcal.synthetic_data. \ _noisy_observation_vectors_for_triangulation(p,Rt01, model0.intrinsics(), model1.intrinsics(), args.Nsamples, sigma = sigma) p_sampled_geometric = mrcal.triangulate_geometric( v0_noisy, v1_noisy, t01 ) p_sampled_lindstrom = mrcal.triangulate_lindstrom( v0local_noisy, v1local_noisy, Rt01 ) p_sampled_leecivera_l1 = mrcal.triangulate_leecivera_l1( v0_noisy, v1_noisy, t01 ) p_sampled_leecivera_linf = mrcal.triangulate_leecivera_linf( v0_noisy, v1_noisy, t01 ) p_sampled_leecivera_mid2 = mrcal.triangulate_leecivera_mid2( v0_noisy, v1_noisy, t01 ) p_sampled_leecivera_wmid2 = mrcal.triangulate_leecivera_wmid2(v0_noisy, v1_noisy, t01 ) q0_sampled_geometric = mrcal.project(p_sampled_geometric, *model0.intrinsics()) q0_sampled_lindstrom = mrcal.project(p_sampled_lindstrom, *model0.intrinsics()) q0_sampled_leecivera_l1 = mrcal.project(p_sampled_leecivera_l1, *model0.intrinsics()) q0_sampled_leecivera_linf = mrcal.project(p_sampled_leecivera_linf, *model0.intrinsics()) q0_sampled_leecivera_mid2 = mrcal.project(p_sampled_leecivera_mid2, *model0.intrinsics()) q0_sampled_leecivera_wmid2 = mrcal.project(p_sampled_leecivera_wmid2, *model0.intrinsics()) range_sampled_geometric = nps.mag(p_sampled_geometric) range_sampled_lindstrom = nps.mag(p_sampled_lindstrom) range_sampled_leecivera_l1 = nps.mag(p_sampled_leecivera_l1) range_sampled_leecivera_linf = nps.mag(p_sampled_leecivera_linf) range_sampled_leecivera_mid2 = nps.mag(p_sampled_leecivera_mid2) range_sampled_leecivera_wmid2 = nps.mag(p_sampled_leecivera_wmid2) if args.cache is not None: with open(cache_file,"wb") as f: pickle.dump((v0local_noisy, v1local_noisy, v0_noisy, v1_noisy, p_sampled_geometric, p_sampled_lindstrom, p_sampled_leecivera_l1, p_sampled_leecivera_linf, p_sampled_leecivera_mid2, p_sampled_leecivera_wmid2, q0_sampled_geometric, q0_sampled_lindstrom, q0_sampled_leecivera_l1, q0_sampled_leecivera_linf, q0_sampled_leecivera_mid2, q0_sampled_leecivera_wmid2, range_sampled_geometric, range_sampled_lindstrom, range_sampled_leecivera_l1, range_sampled_leecivera_linf, range_sampled_leecivera_mid2, range_sampled_leecivera_wmid2), f) print(f"Wrote cache to {cache_file}") else: with open(cache_file,"rb") as f: (v0local_noisy, v1local_noisy, v0_noisy, v1_noisy, p_sampled_geometric, p_sampled_lindstrom, p_sampled_leecivera_l1, p_sampled_leecivera_linf, p_sampled_leecivera_mid2, p_sampled_leecivera_wmid2, q0_sampled_geometric, q0_sampled_lindstrom, q0_sampled_leecivera_l1, q0_sampled_leecivera_linf, q0_sampled_leecivera_mid2, q0_sampled_leecivera_wmid2, range_sampled_geometric, range_sampled_lindstrom, range_sampled_leecivera_l1, range_sampled_leecivera_linf, range_sampled_leecivera_mid2, range_sampled_leecivera_wmid2) = \ pickle.load(f) plot_options = {} if args.set is not None: plot_options['set'] = args.set if args.unset is not None: plot_options['unset'] = args.unset if args.hardcopy is not None: plot_options['hardcopy'] = args.hardcopy if args.terminal is not None: plot_options['terminal'] = args.terminal if args.ellipses: # Plot the reprojected pixels and the fitted ellipses data_tuples = \ [ *mrcal.utils._plot_args_points_and_covariance_ellipse( q0_sampled_geometric, 'geometric' ), *mrcal.utils._plot_args_points_and_covariance_ellipse( q0_sampled_lindstrom, 'lindstrom' ), *mrcal.utils._plot_args_points_and_covariance_ellipse( q0_sampled_leecivera_l1, 'lee-civera-l1' ), *mrcal.utils._plot_args_points_and_covariance_ellipse( q0_sampled_leecivera_linf, 'lee-civera-linf' ), *mrcal.utils._plot_args_points_and_covariance_ellipse( q0_sampled_leecivera_mid2, 'lee-civera-mid2' ), *mrcal.utils._plot_args_points_and_covariance_ellipse( q0_sampled_leecivera_wmid2,'lee-civera-wmid2' ), ] if not args.samples: # Not plotting samples. Get rid of all the "dots" I'm plotting data_tuples = [ t for t in data_tuples if \ not (isinstance(t[-1], dict) and \ '_with' in t[-1] and \ t[-1]['_with'] == 'dots') ] if args.title is not None: title = args.title else: title = 'Reprojected triangulated point' if args.extratitle is not None: title += ': ' + args.extratitle gp.plot( *data_tuples, ( q0, dict(_with = 'points pt 3 ps 2', tuplesize = -2, legend = 'Ground truth')), square = True, wait = 'hardcopy' not in plot_options, title = title, **plot_options) elif args.ranges: # Plot the range distribution range_ref = nps.mag(p) if args.title is not None: title = args.title else: title = "Range distribution" if args.extratitle is not None: title += ': ' + args.extratitle gp.plot( nps.cat( range_sampled_geometric, range_sampled_lindstrom, range_sampled_leecivera_l1, range_sampled_leecivera_linf, range_sampled_leecivera_mid2, range_sampled_leecivera_wmid2 ), legend = np.array(( 'range_sampled_geometric', 'range_sampled_lindstrom', 'range_sampled_leecivera_l1', 'range_sampled_leecivera_linf', 'range_sampled_leecivera_mid2', 'range_sampled_leecivera_wmid2' )), histogram=True, binwidth=200, _with='lines', _set = f'arrow from {range_ref},graph 0 to {range_ref},graph 1 nohead lw 5', wait = 'hardcopy' not in plot_options, title = title, **plot_options) else: raise Exception("Getting here is a bug") mrcal-2.4.1/autodiff.hh 0000664 0000000 0000000 00000031370 14563016623 0014750 0 ustar 00root root 0000000 0000000 // Copyright (c) 2017-2023 California Institute of Technology ("Caltech"). U.S. // Government sponsorship acknowledged. All rights reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 #pragma once /* Automatic differentiation routines. Used in poseutils-uses-autodiff.cc. See that file for usage examples */ // Apparently I need this in MSVC to get constants #define _USE_MATH_DEFINES #include