As mentioned previously, an important philosophy of the software is to minimize the number of questions asked of the user. As use of the product grows, online-only lists will be added that automatically answer some prompt questions. An example is local weather conditions, which affect many routines: in the examples/travel lists, the weather conditions determine what types of clothes and outerwear to pack. An automatic response saves the user not only from answering another prompt, but also from checking the weather online. As AI tools become better at answering natural-language queries, these automated lists will become more useful.
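The idea of automatically answering a prompt can be sketched as follows. This is a minimal illustration, not the product's actual implementation; the function name, prompt identifier, and condition fields are all hypothetical, and a real version would fetch conditions from an online weather service.

```python
def auto_answer_weather(prompt_id, conditions):
    """Answer a weather prompt from fetched conditions instead of asking the user.

    `prompt_id` and the `conditions` keys are hypothetical names for this sketch.
    Returns None when the prompt is not one this answerer knows, so the
    system can fall back to asking the user as usual.
    """
    if prompt_id != "weather":
        return None  # not our prompt; ask the user
    if conditions["precip_chance"] > 0.5:
        return "rainy"   # pack rain gear
    if conditions["temp_f"] < 40:
        return "cold"    # pack warm outerwear
    return "mild"
```

The key design point is the `None` fallback: an online-only list only suppresses a prompt when it can answer confidently, and otherwise the normal question is shown.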
This feature is expected to grow to include many conditionals. For example, the online status of other employees could be taken into account automatically when generating a checklist for a routine that involves multiple contributors. As this network of online automated lists becomes more useful, it should in turn stimulate further growth through user-contributed content.
The primary objective has been usefulness in business routines, chiefly those performed in the field or “on the go”. However, the flexibility and customization of the list language also make it useful in an office environment, and, as the example lists show, in the everyday routines of people's personal lives. A consumer version may be offered in the future.
The language's flexibility allows lists to be updated to match the things that inevitably change. Business processes are constantly modified and new practices implemented. Updating a list to reflect such changes is simple, but it carries the risk of introducing unwanted errors into the generated checklist. Such unexpected errors might be minor, but the omission of even one important task could lead to a much bigger problem. It is therefore necessary to verify the final result against previous versions, to ensure that the only changes are those that were intended.
The solution is for the author to add a set of test scenarios, each supplying different responses to the prompts. This is done when the lists are first written, and it exercises the many possible paths through the routine. After examining the results, the author certifies the output of each scenario as a standard. Later, whenever changes are made to the set of lists, the testing step runs: the automated testing application replays the prompts with the pre-recorded answers and displays the differences for the author. The (usually small) differences are highlighted to make analysis quick and easy. If the differences match the intentions of the author who made the changes, the results become the new standard for the next run. If there are unexpected results, the differences are clearly shown, making the problem easy to fix.
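The replay-and-diff step described above amounts to what is often called golden-file or snapshot testing. The sketch below shows the core loop under stated assumptions: `generate_checklist` stands in for the real checklist generator (its signature here is invented for illustration), scenarios map names to recorded prompt answers, and certified outputs are the previously approved standards.

```python
import difflib

def run_scenarios(generate_checklist, scenarios, certified):
    """Replay recorded prompt answers and diff each result against its standard.

    generate_checklist: hypothetical function taking a dict of prompt answers
        and returning the checklist as a list of task strings.
    scenarios: {scenario_name: {prompt: recorded_answer, ...}}
    certified: {scenario_name: [expected task, ...]} -- the approved standards.
    Returns {scenario_name: unified diff lines} for scenarios that differ;
    an empty dict means every scenario still matches its standard.
    """
    diffs = {}
    for name, answers in scenarios.items():
        current = generate_checklist(answers)
        standard = certified.get(name, [])
        if current != standard:
            diffs[name] = list(difflib.unified_diff(
                standard, current, "certified", "current", lineterm=""))
    return diffs
```

A toy generator makes the behavior concrete: a scenario whose output still matches its certified standard produces no diff, while a changed path is flagged with only the differing lines, which is what lets the author review small, highlighted changes rather than whole checklists.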
This automated testing has already been implemented and works very well, but is not included in the current online interface.