Some tests are testing external factors
The most recently added tests in `test_translator.py` and `test_main.py`, in order to test the translation features and the whole TransLaTeX pipeline, use assertions on the translation API responses.
For example:
```python
# from tests/test_main.py
def test_translation_with_no_document_env():
    source = r"\section{Hello World}"  # Here
    translated = translate(source)
    assert translated == r"\section{Bonjour le monde}"  # Here asserting a hard coded translation
```
```python
# from tests/test_translator.py
def test_translate_small(small_trans):
    small_trans.translate(service=TEST_SERVICE)
    assert small_trans.translated_string == dedent("""
    [0-4]
    [0-3]{[0-1] Hello world}
    [0-2]
    [0-5]
    """)  # Here asserting both a translation
    # and the full response including whitespace
```
I think this behavior makes these tests fragile. If we switched to a different API, or upgraded the version of the API in use, the tests could break simply because the responses differ (slightly or hugely).
Moreover, I think we should stick mostly to testing the core program's behavior and logic. External factors are always present and out of our control. We should ensure the core is functional and works as expected.
So maybe we could have tests that:

- don't make any API calls at all, by using the `--dry-run` option,
- do make API calls, but only check that the calls are correctly sent, received, decoded, and error-handled, and
- execute the whole pipeline, but only run a LaTeX compiler on the output to check that it still compiles without errors after the translation, instead of comparing the response string to a hard-coded translation inside the test (see the sketch below),

so that they are more robust and give more insight into the core program's expected behavior. This way we can also run the tests against any of the integrated APIs, instead of hard-coding only one and tailoring the tests around its responses.
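As a rough sketch of that last, compile-only idea (only an illustration: the import path for `translate`, the environment layout, and the `latexmk` invocation are assumptions, not the project's actual setup), such a test might look like:

```python
# Sketch only: the translate() import path and the latexmk invocation are
# assumptions for illustration, not the project's actual API.
import shutil
import subprocess

import pytest

from translatex import translate  # assumed import, mirroring tests/test_main.py


@pytest.mark.skipif(shutil.which("latexmk") is None,
                    reason="needs a LaTeX compiler on PATH")
def test_translated_document_still_compiles(tmp_path):
    source = "\n".join([
        r"\documentclass{article}",
        r"\begin{document}",
        r"\section{Hello World}",
        r"\end{document}",
    ])
    translated = translate(source)
    tex_file = tmp_path / "translated.tex"
    tex_file.write_text(translated)
    # Only the exit code matters: the translated wording may differ between
    # services and versions, but the output must remain valid LaTeX.
    result = subprocess.run(
        ["latexmk", "-pdf", "-interaction=nonstopmode", tex_file.name],
        cwd=tmp_path,
        capture_output=True,
    )
    assert result.returncode == 0
```

The assertion then covers our own invariant (the output is still compilable LaTeX) rather than the exact wording a particular service happens to return.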
I made a first attempt at this issue (before doing anything substantial) in this merge request with this commit, by making these tests optional so that the test suite can run without problems in any developer environment and not just on `cassandre`[^1]. We can settle for this, or rework the tests as per the above conditions and remove the optional-test logic, making them all mandatory but universally runnable.
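For reference, the "optional" behavior can be expressed with a plain pytest skip condition. This is only an illustrative sketch, not necessarily what the merge request does; the `GOOGLE_API_KEY` variable name and the `"google"` service identifier are hypothetical:

```python
# Illustrative only: the GOOGLE_API_KEY variable name and the "google"
# service identifier are hypothetical, not necessarily what the MR uses.
import os

import pytest

requires_google_api = pytest.mark.skipif(
    not os.environ.get("GOOGLE_API_KEY"),
    reason="no Google Translate API key configured in this environment",
)


@requires_google_api
def test_translate_small_online(small_trans):
    small_trans.translate(service="google")
    # Only check that a response came back and was decoded, not its wording.
    assert small_trans.translated_string
```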
[^1]: Looks like some tests need a Google API key, which makes them fail if the developer doesn't have one. This is a non-issue since a key is set on `cassandre`, making the tests run without problems there. Secondly, I found out that the `IRMA DLMDS` API is internal to Unistra and not accessible from the outside world, making the tests that use it fail for anyone external to Unistra. This is why, for the time being, I made these tests optional. They run just fine and succeed on `cassandre`, but they are not universal. It was a kind of "works on my machine" type of situation 😄.