
Hi Lothar,

a) (Completeness of the specification) Wouldn't it be appropriate to name the exact LibreOffice
and UNO API version this targets, i.e. which LibreOffice version the "WatchCode" implementation
is meant to be used with (or delivered in)? Which version of the XRAY and MRI tools is relevant
here? At least say "the latest", with a hint about where to obtain them.

I think the idea is that the work is developed on LibreOffice master, so
it gets released in the next major version after the work is done. This
is how all previous tenders were delivered. The result is part of
LibreOffice itself, so specifying a LibreOffice version adds no value.

XRAY and MRI are just examples of what's possible for an inspection
tool, so I would consider their version as not relevant.
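
To illustrate what such an inspection tool does, here is a minimal sketch in plain Python: XRAY and MRI essentially enumerate the properties and methods of a live UNO object at runtime. The `Cell` class below is a made-up stand-in for a UNO object, purely for demonstration; this is not how the tendered tool would be implemented.

```python
def inspect_object(obj):
    """Roughly what XRAY/MRI do for UNO objects: list an object's
    public properties and methods (plain-Python illustration only)."""
    report = {"type": type(obj).__name__, "properties": [], "methods": []}
    for name in dir(obj):
        if name.startswith("_"):
            continue  # skip private/dunder members
        if callable(getattr(obj, name)):
            report["methods"].append(name)
        else:
            report["properties"].append(name)
    return report


# Hypothetical object standing in for a UNO service:
class Cell:
    def __init__(self):
        self.value = 42
        self.formula = "=A1+B1"

    def clear(self):
        self.value = 0


info = inspect_object(Cell())
print(info["properties"])  # ['formula', 'value']
print(info["methods"])     # ['clear']
```

A real tool would of course query the UNO introspection services instead of Python's `dir()`, but the user-visible result is the same kind of property/method listing.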

b) (Feature request) I miss code autocompletion as a feature: in Visual Studio, for example,
typing "." triggers a list of the possible services or DOM tree alternatives, and hitting
return also completes the parameter part. (Or is this what is meant by the copy & paste
feature?)

My understanding is that we currently provide no good autocompletion
APIs, and such an inspection tool would have to build on top of them. If
you add autocompletion to the scope, it can easily double the amount of
work needed, so I would carefully avoid that.

c) (Completeness of the specification) It is mentioned that automated testing should be used
"everywhere where possible". Well, to be honest, this is a huge field. Shouldn't we specify
in a bit more detail what we expect here? Are there automated test tools we already use whose
results we want to see, or for which we want the automation scripts, or ...?

I believe the current wording was already used for previous tenders,
without problems. The idea is that whenever a sub-task is done
(something gets fixed or implemented), adding a test for it should be
considered. It's hard to specify this in more detail: if you add
quantity requirements, then it's easy to add a lot of useless tests, and
it's not easy to measure test quality with numbers. :-)

I would prefer a reasonable amount of good tests, rather than a lot of
useless tests. The effort needed to add tests is also different for each
& every case: sometimes it's a shame that a test is not added, sometimes
it would be a heroic effort to cover some behavior with an automated test.
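
As a sketch of what "add a test when a sub-task is done" means in practice, here is a made-up example in Python's unittest style (LibreOffice itself uses CppUnit and Python tests, but the shape is the same). The `format_percentage` helper and its imagined bug are purely hypothetical:

```python
import unittest


def format_percentage(value):
    """Hypothetical helper: the imagined bug returned '0.5%' for 0.5
    instead of '50%'; this is the fixed version."""
    return f"{value * 100:g}%"


class TestFormatPercentage(unittest.TestCase):
    # A small regression test locks in the fix, so the bug
    # cannot silently come back later.
    def test_half_is_fifty_percent(self):
        self.assertEqual(format_percentage(0.5), "50%")

    def test_whole_number(self):
        self.assertEqual(format_percentage(1.0), "100%")


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestFormatPercentage)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point is proportion: a few focused assertions per fixed sub-task, not a quota of tests.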

d) (Details in the proposal) I would also expect a detailed estimate, in the sense that it is
not just one figure, but at least one figure for each feature mentioned, in the mandatory as
well as in the optional part. If bidders propose other features (not mentioned here), they
should give a figure for those as well. Is this mentioned anywhere?

It is possible that it's hard to compare proposals if the proposals have
optional features. One consistent way is to assume you order either all
or none of the optional items. I imagine if the proposal is detailed enough,
there is a brief description of each sub-task, how it would be done --
then you can get the impression at the end that the bidder did their
homework, and the number at the bottom of the offer is not just a guess.
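
The "all or none of the optional items" comparison can be sketched in a few lines of Python; the bidder names and per-feature figures below are entirely invented for illustration:

```python
# Hypothetical per-feature estimates (in person-days). Comparing each
# bid both without and with all optional features puts every proposal
# on the same two consistent baselines.
proposals = {
    "bidder_a": {"mandatory": [30, 20, 15], "optional": [10, 5]},
    "bidder_b": {"mandatory": [25, 25, 10], "optional": [20]},
}

for name, parts in proposals.items():
    base = sum(parts["mandatory"])       # order nothing optional
    full = base + sum(parts["optional"]) # order everything optional
    print(f"{name}: {base} days without options, {full} days with all options")
```

With per-feature figures like these, the bottom-line number can also be cross-checked against its parts.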


