Saturday, September 8, 2012

On Testing

More than a decade ago, I spent a year doing quality assurance at a big and successful smart card company. It was one of the more intellectually stimulating jobs I've had in the software industry and I ended up developing a scripting language, sadly long lost, to do test automation. Doing testing right can be much harder than writing the software being tested, especially when you're after 100% quality, as is the case with software burned onto the chips of millions of smart cards that would have to be thrown away if a serious bug were discovered post-production. Companies aiming at this level of quality get ISO 900x certified to show off to clients. To get certified, they have to show a QA process that guarantees, to an acceptable degree of confidence, that delivered products work, that the organization is solid, that knowledge is preserved etc. etc. The interesting part that I'd like to share with you is the specific software QA approach. It did involve an obscene amount of both documentation and software artifacts that had to be produced in a very rigid formal setting, but the philosophy behind spec-ing out the tests was sound, practical and better than anything I've seen since.

Dijkstra famously said that testing can only prove the presence of errors, never their absence. True that. To have a 100% guarantee that a program works, one would need to produce a mathematical proof of its correctness. And perhaps even that is not sufficient: as Knuth no less famously noted when sending a program to a colleague, the program must be used with caution because he had only proved it correct, never tested it. In either case, the ultimate goal is to gain confidence in the quality of a piece of software. We do that by presenting a strong, very, very convincing argument that the program works.

When discussing testing methodology, people generally talk about automated vs. manual testing, or test-driven development where test cases are developed before the code vs. "classic" testing where they are written after development, but rarely do I see people mindful of how tests should be specified. The term test itself is used rather ambiguously to mean the action of testing, or the specification, or the process, or the development phase. And in some contexts a test means a test case, which refers to an input data set, or it refers to an automated program (e.g. a JUnit test case). So let's agree that, whether you code it up or do it manually, a test case consists of a sequence of steps taken to interact with the software being tested and verify the output, with the goal of ensuring that said software behaves as expected. So how do you go about deciding what this sequence of steps should be? In other words, how do you gather test requirements?

Think of it in dialectical terms - you're trying to convince a skeptic that a program works correctly. First you'd have to agree what it means for that program to work correctly. Well, they say, it must match the requirements. So you start by reading all requirements and translating that last statement ("it must match the requirements" ) for each one of them into a corresponding set of test criteria. Naturally, the more detailed the requirements are, the easier that process is. In an agile setting, you might be translating user stories into test criteria. Let's have a simple running example:

Requirement:

Login form should include a captcha protection

Test Criteria:

  • C1 - the login form should display a string as an image that's hard to recognize by a program.
  • C2 - the login form should include an input field that must match the string in the captcha image for login to succeed. 

Notice how the test criteria explicitly state under what conditions one can say that the program works. One can list more criteria with further detail, stating what happens if the captcha doesn't match, what happens after n repeats etc. Also, this should make it clear that test criteria are not actual tests. They are not something that can be executed (manually or automatically). In fact, they are to a test program what requirements are to the software being QA-ed. And as with conventional requirements, the clearer you are on your test criteria, the better your chances of developing adequate tests.

The crucial point is that when you write a test case, you want to make sure that it has a well-defined purpose, that it serves as a demonstration that an actual test criterion has been met. And this is what's missing in 90% of testing efforts (well, to be sure, this is just anecdotal evidence). People write or perform tests simply for the sake of trying things out. Tests accumulate and if you have a lot of them, it looks like you're in good shape. But that's not necessarily the case, because tests can only prove the presence of errors, not their absence. To convince your dialectical opponent of the absence of errors, given the agreed upon list of criteria, you'd have to show how your tests prove that the criteria have been met. In other words, you want to ensure that all your test criteria have been covered by appropriate test cases - that for each test criterion there is at least one test case that, when successful, shows that this criterion is satisfied. A convenient way to do that is to create a matrix where you list all your criteria in the rows and all your test cases in the columns and checkmark a given cell whenever the test case covers the corresponding criterion, where "covers" means that if the test case succeeds, one can be confident that the criterion is met. This implies that the test case itself will have to include all necessary verification steps. Continuing with our simple example, suppose you've developed a few test cases:

  • T1 - test successful login
  • T2 - test failed login for all 3 fields, bad user, bad password, bad captcha
  • T3 - test captcha quality by running image recognition algos on captcha
        T1    T2    T3    T4    T5    ...
  C1                X
  C2    X     X
  ...

A given test case may cover a certain aspect of the program, but you'd put a checkmark only if it actually verifies the criterion in question. For instance, T3 would be loading a login page, but it won't be testing actual login. Similarly, T1 and T2 can observe the captcha, but they won't evaluate its quality. The approach may appear a bit laborious. In the aforementioned company, this was all documented ad nauseam. Criteria were classified as "normal", "abnormal", "stress" and what not, reflecting different types of expected behaviors and possible execution contexts. Now, I did warn you - this was a QA process aimed at living up to ISO standards. And it did. But think about the information this matrix provides. It is a full, detailed spec of your software. It is a full inventory of your test suite. It tells you what part of the program is being tested by what test. It shows you immediately if some criteria are not covered by a test, or not covered enough. It shows you immediately if some criteria are covered too much, i.e. if some tests are superfluous. When tests fail, it tells you exactly what behaviors of the software are not functioning properly. Recall that one of the main problems with automated testing is the explosion of code that needs to be written to achieve decent coverage. This matrix can go a long way toward controlling that code explosion by keeping each test case with a relatively unique purpose. Most importantly, the matrix presents a pretty good argument for the program's correctness - you can see at a glance both how correctness has been defined (the list of criteria) and how it is demonstrated (the list of tests cross-referenced with criteria).
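
If you want to make the matrix operational, it is easy enough to turn it into a small data structure. Here is a minimal sketch in Java (the class and method names are made up for illustration): a map from each criterion to the test cases covering it, plus a check for criteria that nothing covers. The example data is the captcha scenario above.

import java.util.*;

// Hypothetical sketch of the Criteria x Tests matrix as code: for each criterion,
// record which test cases cover it and report the ones left uncovered.
public class CoverageMatrix
{
    private final Map<String, Set<String>> coverage = new LinkedHashMap<>();

    // Declare a criterion with no covering tests yet.
    public void criterion(String criterionId)
    {
        coverage.putIfAbsent(criterionId, new LinkedHashSet<>());
    }

    // Checkmark a cell: testCaseId covers criterionId.
    public void cover(String criterionId, String testCaseId)
    {
        criterion(criterionId);
        coverage.get(criterionId).add(testCaseId);
    }

    // Criteria with no covering test case - the gaps in the quality argument.
    public List<String> uncovered()
    {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : coverage.entrySet())
            if (e.getValue().isEmpty())
                result.add(e.getKey());
        return result;
    }

    public static void main(String[] args)
    {
        CoverageMatrix m = new CoverageMatrix();
        m.criterion("C1");
        m.criterion("C2");
        m.cover("C1", "T3"); // image recognition evaluates captcha quality
        m.cover("C2", "T1"); // a successful login implies the captcha matched
        m.cover("C2", "T2"); // a failed login on a bad captcha
        System.out.println("Uncovered criteria: " + m.uncovered()); // prints []
    }
}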

Reading about testing, even from big industry names, I have been frequently disappointed at the lack of a systematic approach to test requirements. In practice it's even worse. Developers, testers, business people in general have no idea what they are doing when testing. This includes agile teams where tests are sometimes supposed to constitute the specification of the program. That's plain wrong: first, because it's code, and code is way too low-level to be understood by all stakeholders, hence it can't be a specification that can be agreed upon by different parties. Second, because usually the same people write both the tests and the program being tested, the same bugs sneak into both places and never get discovered, and the same possibly wrong understanding of the desired behavior is found in both places. So expressing the quality argument (i.e. the one with the imaginary dialectical adversary) simply in the form of test cases can't cut it.

That said, I wouldn't advocate following the approach outlined above verbatim and in full detail. But I would recommend keeping the mental picture of that Criteria x Tests matrix as a guide to what you're doing. And if you're building a regression test suite, and especially if some of the tests are manual, it might be worth your while spelling it out in the corporate wiki somewhere.

Boris

PS This is original content from kobrix.blogspot.com

Wednesday, July 18, 2012

Dealing With Change - Events

Events are a great way to manage change in complex software made up of many components. When you have decoupled software entities that need to be notified about changes, it's easier to represent the change itself explicitly, as an event entity, so that producers (originators) and consumers (receivers) of the event don't have to know about each other. This leads to fewer connections in the graph of dependencies between the software components comprising the system. This blog post documents the event framework in the Sharegov CiRM platform. This is a first draft and the framework is expected to evolve, of course.

Overview

Within the context of software, events essentially model data changes at various locations. So an event framework needs to define how events are represented, what kinds of data changes are supported, what kind of information an event entity contains, as well as the gluing infrastructure that allows some components to publish events and others to consume them.

Events Ontology Model

The various types of events are modelled in the ontology under Event->SoftwareEventType. A seemingly natural way to model events in OWL is for each event occurrence to be an individual and the various event types to be described via punned classes. But since we don't record event occurrences anywhere, we don't really need to represent them as OWL individuals. So instead we model the types of events that can occur as OWL individuals with properties that govern their behavior to an extent, and we categorize those event types into a few broad categories. On one hand we have events processed entirely at the client and on the other we have entity related events that can be processed on the server or result in server<->client communication. The client-only events help in connecting otherwise decoupled client-side components and they are described last. The entity (i.e. OWL individual) events are more thoroughly formalized and they are described next.

Server-Side Event Management

Event handling on the server side is implemented by the classes in the org.sharegov.cirm.event package. The most common types of events are those that reflect a change in an entity. Such events are modelled with the SoftwareEventType->EntityChangeEvent class. Each individual belonging to that class models how a change to some kind of entity is dealt with. The "kind" of entity is specified through a DL query expression. The following properties comprise that model:

  1. hasChangeType: any suitable individual that represents the type of change, normally an instance of Activity.
  2. hasImplementation: the fully qualified class name of an org.sharegov.cirm.event.EventTrigger implementation that is invoked to process this event occurrence. There can be multiple such properties and each will be invoked in an unspecified order.
  3. hasQueryExpression: a Description Logics (DL) query expression that specifies for what types of individuals this event will be triggered. The query expression is evaluated to obtain the set of all sub-classes. Then, whenever a change to an individual is submitted for processing, it is checked whether the individual belongs to one of the sub-classes defined by that expression. Multiple hasQueryExpression properties are allowed.

Events are processed on the server by an org.sharegov.cirm.EventDispatcher singleton. All events defined in the ontology are loaded upon startup and the DL query expressions are evaluated to create a map of OWLClass->EventTrigger. That singleton is accessed by the various services to explicitly publish events via one of the overloaded EventDispatcher.dispatch methods.
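
To make the mechanics concrete, here is a deliberately simplified dispatcher sketch. It is not the actual org.sharegov.cirm.event API - the Trigger interface and all the names below are invented for illustration - but it shows the basic shape: a map from entity class IRI to interested triggers, populated at startup, and a dispatch method that walks the classes a changed individual belongs to.

import java.util.*;

// Simplified, hypothetical illustration of class-based event dispatch.
public class SimpleEventDispatcher
{
    // Invented stand-in for an event trigger implementation.
    public interface Trigger
    {
        void apply(String individualIri, String changeType);
    }

    // Entity class IRI -> triggers registered for changes to individuals of that class.
    private final Map<String, List<Trigger>> triggersByClass = new HashMap<>();

    public void register(String classIri, Trigger trigger)
    {
        triggersByClass.computeIfAbsent(classIri, k -> new ArrayList<>()).add(trigger);
    }

    // classIris: all classes the changed individual belongs to (e.g. obtained from the DL query expansion).
    public void dispatch(String individualIri, Set<String> classIris, String changeType)
    {
        for (String classIri : classIris)
            for (Trigger t : triggersByClass.getOrDefault(classIri, Collections.emptyList()))
                t.apply(individualIri, changeType);
    }
}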

Server to Client Events

As a lot of application logic resides in the browser, it is wise to load the relevant data beforehand in order to minimize network traffic and improve response time. This of course poses the problem of updates on the server which invalidate the data at the client. Synchronization of such updates happens through server->client events, the so called "server push". The most efficient way to implement a server push is for the client to do what is referred to as long polling (see http://en.wikipedia.org/wiki/Push_technology) - open a connection with the server and let it time out if the server has nothing to say, then immediately open a new connection again. However, the Restlet framework we are currently using doesn't support this mode, so we had to revert to the traditional style of polling where the server returns right away if there are no events to deliver and the client polls again after a certain interval. In order to decide which events the server should send to a particular client, the client sends the timestamp of the last time it polled. The server then responds with all events carrying a later timestamp. Because the comparison is only relative to the client, there aren't any clock synchronization issues to worry about.
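
The timestamp comparison is simple enough to sketch. The class below is a hypothetical stand-in, not the actual ClientPushQueue: the server records each queued event with the time it was added, a poll returns everything newer than the timestamp the client echoes back, and the response carries the server's current time so the client never has to consult its own clock.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of timestamp-relative polling.
public class PollQueue
{
    public static class PollResult
    {
        public final long serverTime;      // the client echoes this back on its next poll
        public final List<String> events;  // event payloads queued since the last poll
        PollResult(long serverTime, List<String> events) { this.serverTime = serverTime; this.events = events; }
    }

    private static class Entry
    {
        final long stamp; final String payload;
        Entry(long stamp, String payload) { this.stamp = stamp; this.payload = payload; }
    }

    private final List<Entry> queue = new ArrayList<>();

    public synchronized void push(String eventPayload)
    {
        queue.add(new Entry(System.currentTimeMillis(), eventPayload));
    }

    public synchronized PollResult poll(long lastSeenServerTime)
    {
        List<String> newer = new ArrayList<>();
        for (Entry e : queue)
            if (e.stamp > lastSeenServerTime)
                newer.add(e.payload);
        return new PollResult(System.currentTimeMillis(), newer);
    }
}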

The queue of events sent to clients is implemented by the org.sharegov.cirm.event.ClientPushQueue class. Events are added to that queue by an org.sharegov.cirm.event.PushToClientEventTrigger associated with the event via the hasImplementation property in the event descriptor.

At the client, polling and event dispatching are managed by the cirm.events object (see the EventManager function inside the cirm.js library). To register for an event coming from the server, call:

cirm.events.bind(eventIri, listenerFunction)

Call cirm.events.unbind to unregister a listener. The cirm.events object also exposes startPolling and stopPolling methods and the ability to explicitly trigger an event via cirm.trigger.

Client-side events

Such events happen entirely on the client (browser). They are triggered by a change of some value on the client and processed by some other component on the same client. Such events are categorized under the ClientSideEventType class. One use case for client-side events is connecting model changes of otherwise disconnected and independent components. When two components are completely decoupled, yet parts of their models represent the same underlying real-world entity, we want a change in one model to be reflected in the other model. When we have such a model that can receive its value from another model through events, we express that declaratively in the following way:

  1. Declare the event individual under ClientSideEventType class.
  2. Declare a data source individual under the EventBasedDataSource with  two properties:
    • providedBy pointing to the event created in step (1)
    • hasPropertyName specifying the name of the property in the runtime event data object that contains the model value.
  3. Add a hasDataSource property to the model individual that must be automatically updated when that event is triggered.

Pure client-side events as described in this section are not processed on the server at all. They just define the model used by the JavaScript libraries on the client to communicate between decoupled components. The event dispatching is implemented via the jQuery events mechanism rather than our cirm.events object. Perhaps we should go through the cirm.events object here as well. However, jQuery has the advantage of scoping listeners and events to DOM elements, which can be important if we have multiple instantiations of the same component at different places on a web page.

Friday, November 11, 2011

MJSON 1.1 Released

NOTE: The library described here has grown and moved to Github with official website http://bolerio.github.io/mjson/. It still remains faithful to the original goal, it's still a single Java source file, but the API has been polished and support for JSON Schema validation was added. And the documentation is much more extensive.

A few months ago, we made an official release of a compact, minimal, concise JSON library that we called MJSON. See the JSON Library blog post for an introduction to this library. After some experience with it and some bug reports on that blog post, we have released an improved, fully backward compatible version 1.1. The original download & documentation links now point to the new version.
Pointers

Documentation: http://www.sharegov.org/mjson/doc
Download: http://www.sharegov.org/mjson/mjson.jar
Source code: http://www.sharegov.org/mjson/Json.java
Maven
The latest 1.1 release is available on Maven central:
<dependency>
    <groupId>org.sharegov</groupId>
    <artifactId>mjson</artifactId>
    <version>1.1</version>
</dependency>
Also, both the 1.0 and 1.1 versions are available from our Maven repository at http://repo.sharegov.org/mvn. This is deprecated and henceforth we'll be publishing only on Maven central. Include that repository in your POM or in a settings.xml profile like so:
<repositories>
  <repository>
    <releases>
      <enabled>true</enabled>
    </releases>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
    <id>sharegov</id>
    <url>http://repo.sharegov.org/mvn</url>
  </repository>
</repositories>
Then include a dependency like so:
<dependency>
  <groupId>mjson</groupId>
  <artifactId>mjson</artifactId>
  <version>1.1</version>
  <scope>compile</scope>
</dependency>
List of Improvements

The following bugs were fixed in this new release:
  • The example from the Javadocs was missing a final 'up()' call.
  • The NumberJson implementation now appropriately returns true in its isNumber method.
  • A parsing bug.
  • Some warnings were removed and explicitly disabled.
The following additional features were implemented:
  • Addition of top-level methods is(String, Object) for objects and is(int, Object) for arrays. Those methods return true if the given named property (or indexed element) is equal to the Object passed in as the second parameter. They return false if the object doesn't have the specified property or the array index is out of bounds. For example, is(name, value) is equivalent to 'has(name) && at(name).equals(make(value))'. A short usage example follows this list.
  • Addition of a dup() method that will clone a given Json entity. This method will create a new object even for the immutable primitive Json types. Objects and arrays are cloned (i.e. duplicated) recursively.
  • Addition of a Factory interface that allows plugging of your own implementation of Json entities (both primitives and aggregates) as well as customized mapping of Java objects to Json. More on this below.
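
Here is a quick illustration of the two new convenience methods, in the same style as the example from the original JSON Library post (the property names are arbitrary):

import mjson.Json;

Json x = Json.object().set("name", "mjson").set("version", "1.1");

x.is("name", "mjson");       // true
x.is("name", "gson");        // false
x.is("license", "Apache");   // false - missing property, no exception thrown

Json y = x.dup();            // deep copy, even primitives are new instances
y.set("version", "2.0");
x.at("version").asString();  // still "1.1" - the original is untouched
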
The Factory Interface - Customizing MJSON

The Factory interface, declared as an inner interface within the scope of the Json class, looks like this:
public static interface Factory
{
    Json nil();
    Json bool(boolean value);
    Json string(String value);
    Json number(Number value);
    Json object();
    Json array();
    Json make(Object anything);
}

You can implement this interface if you need to customize how the Json types are actually represented. For instance, objects are represented using a standard Java HashMap. But you may want to have a different representation, say a LinkedHashMap or a more efficient variant optimized for strings (say, a Trie based map of some sort). Or you may want strings to be case-insensitive, in which case you'd have a Json derived class representing strings whose equals method actually does equalsIgnoreCase, etc.

The make method allows you to customize how arbitrary Java objects are translated into Json. It should be easy, for example, to implement a make method that handles Java beans with introspection, which is something we don't want by default as part of the API.
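
As an illustration of that idea, here is a hypothetical factory that delegates everything the library already understands to the default implementation and falls back to bean introspection otherwise. The class name is invented, the set of types handled by the default make is assumed from the description above, and how the factory gets installed as the active one is not shown here.

import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.Collection;
import java.util.Map;
import mjson.Json;

// Hypothetical sketch: add Java bean support to make(), reusing the default factory
// for everything it already knows how to convert.
public class BeanAwareFactory extends Json.DefaultFactory
{
    @Override
    public Json make(Object anything)
    {
        // Assumed to be the types the default factory handles: primitives, strings,
        // numbers, booleans, maps, collections, arrays and Json itself.
        if (anything == null || anything instanceof Json || anything instanceof String
            || anything instanceof Number || anything instanceof Boolean
            || anything instanceof Map || anything instanceof Collection
            || anything.getClass().isArray())
            return super.make(anything);
        try
        {
            // Otherwise treat the object as a bean: one Json property per readable bean property.
            Json result = object();
            for (PropertyDescriptor pd : Introspector.getBeanInfo(anything.getClass(), Object.class)
                                                     .getPropertyDescriptors())
                if (pd.getReadMethod() != null)
                    result.set(pd.getName(), make(pd.getReadMethod().invoke(anything)));
            return result;
        }
        catch (Exception ex)
        {
            throw new IllegalArgumentException("Cannot convert to Json: " + anything, ex);
        }
    }
}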

The methods in that interface are used internally every time a new Json instance has to be constructed, either from a known type, as a default empty structure or from an arbitrary Java type. Thus you can pretty much customize the representation any way you like while relying on the same simple API.

The default implementation of this factory is public: Json.DefaultFactory. Therefore you can extend that default implementation and only customize certain aspects of the Json representation. 

That's it for this release. Enjoy!

Cheers,
Boris

Tuesday, August 23, 2011

From Rules to Workflows

Some business object types have workflows associated with them. A business object type is identified with an OWL class.  A workflow defines the process through which a business object of that type goes from its creation up to its deletion (or archiving). This can be an enterprise-wide business process involving many other systems and human actors, or a simple one-time interaction with an end user.
In our framework, a business object is always modeled as an OWL individual and its state is always modeled as OWL properties. Furthermore, an object has a lifetime with a beginning and an end. The process governing that lifetime is specified through a set of SWRL rules. The SWRL rules are dynamically converted into a business process that gets executed on the business object. This blog entry outlines the algorithm that translates sets of rules into a process workflow. Details will be given in future blog entries.
Assumptions
Here are the assumptions made about the set of rules defining a business object workflow:
  1. The rules are defined in a single ontology with an IRI that follows the naming pattern http://www.miamidade.gov/swrl/<OWL Class> where <OWL Class> is the OWL classname of the business object.
  2. At least one rule must have a goal atom in its head. Goal atoms are specific to the business object type.
  3. Unless specifically stated, all variables are local to the rules in which they appear.
  4. A global variable named "bo" is reserved and refers to the business object on which the set of rules operates.  
Sketch
If no rule has a goal atom in its conclusion, then no workflow is created at all.
  1. First, the rules are evaluated iteratively against the current state of the business object. When evaluating a rule, the atoms in its body (i.e. its premises) are evaluated and if they are all true, then the atoms in its head are asserted in the current BO ontology. If any of those newly asserted atoms is actually an end goal, then no workflow is constructed because there's nothing to do. If at least one of the atoms in a rule's body is known to be false, then the rule is subsequently ignored. If there are unknown atoms in the rule's body (but no false ones), the whole rule is deemed "unknown" and will be included in the construction of the workflow. When there are no new atom assertions from any rule during the current iteration, the evaluation process is complete.
  2. At the end of the evaluation process, we are left with a set of "unresolved" rules, that is, rules with unknown atoms in their bodies. Each such rule is annotated with extra information and wrapped in a class called AppliedRule that contains the values of instantiated variables and the dependencies between the atoms within the rule. Such dependencies are inferred through some heuristics and assumptions described below.
  3. At this stage, we have several unresolved rules remaining, some of which contain a goal atom in their head (if none of them does, then the workflow is empty). Starting from the rules containing goal atoms in their head (i.e. their conclusion) and going in a backward-chaining fashion, we enumerate all possible logical paths to satisfy the rules and reach their conclusion. Each unknown atom X of the currently examined rule body is added to the deduction path and then, recursively, so are the unknown atoms from rules where X appears in the conclusion. During the enumeration process each such atom is converted into a WorkflowPathElement instance, which is a helper class that holds an atom and how it depends on other atoms in the eventually constructed workflow. So a dependency graph between the unknown SWRL atoms is created, where the edges are SWRL variables that get propagated from atom to atom. A logical dependency between an atom in a rule's body and an atom in that rule's head is represented by a predefined SWRL variable named implies. This dependency graph is important in that it defines which task in the workflow must be evaluated before which other task.
  4. The next stage converts each logical path found in the previous stage into a sequence of workflow tasks to be executed to reach the goal. The sequence is constructed starting from WorkflowPathElements that don't have dependencies and then adding their dependencies recursively as subsequent tasks to be executed. In addition to that, tasks to assert all possible conclusions are added as soon as possible. That is, every time a task is added to the current sequence, a search is made to find all rules whose premises would be satisfied were all tasks up to this point to succeed, then the conclusions of those rules are added as AssertAtomTasks. In this way, every logical path deducing an end goal is converted into a sequence of steps where no step downstream in the sequence depends on a step upstream (for the value of a variable or logically) and where all possible conclusions from unresolved rules are asserted as soon as the premises of those rules become true.
  5. At this point, we have a bunch of linear sequences of tasks. Each of those sequences can be executed step by step to reach an end goal for the business object, but each intermediary step has the potential of failing. When a step fails, we want to branch to another sequence that may succeed, but we don't want to repeat the execution of steps that have already succeeded and we don't want to branch to a sequence that we know will fail. The construction of the workflow from this set of sequences relies on them being ordered appropriately. Each task in a sequence is assigned a cost, and independent tasks within a sequence are thus naturally ordered by their cost. Dependent tasks are assigned costs as the sum of their dependencies, so at the end the order of all tasks in a workflow path is determined entirely by this cost number. The task paths themselves are ordered by cost, where the cost of a task path is simply the sum of the costs of all its tasks (see the sketch after this list).
  6. Given this setup, the workflow is constructed as follows: start with the least costly sequence. Its first task is the starting point of the workflow. Each task, except atom assertion tasks, results in a true/false result. So add a boolean branching node after each task that branches to the next task in the current path on "true" and to the "first possible task" in subsequent paths on "false". This "first possible task" is obtained by examining each of the more costly paths in turn, and looking for a task not already on the current path and such that all of its preceding tasks are on the current path.
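
To illustrate the cost bookkeeping from step 5, here is a small hypothetical sketch (representing tasks as plain strings; all names are invented and the exact cost formula is an assumption): a task's cost is taken as its own base cost plus the costs of everything it depends on, and a path's cost is the sum over its tasks.

import java.util.*;

// Hypothetical sketch of the cost assignment used to order tasks and task paths.
public class CostOrdering
{
    // Cost of a task = its base cost + the costs of its dependencies (computed recursively).
    static int taskCost(String task,
                        Map<String, Integer> baseCost,
                        Map<String, List<String>> dependsOn,
                        Map<String, Integer> memo)
    {
        Integer cached = memo.get(task);
        if (cached != null)
            return cached;
        int total = baseCost.getOrDefault(task, 1);
        for (String dep : dependsOn.getOrDefault(task, Collections.<String>emptyList()))
            total += taskCost(dep, baseCost, dependsOn, memo);
        memo.put(task, total);
        return total;
    }

    // Cost of a path = sum of the costs of its tasks; paths are then sorted by this number.
    static int pathCost(List<String> path,
                        Map<String, Integer> baseCost,
                        Map<String, List<String>> dependsOn)
    {
        Map<String, Integer> memo = new HashMap<>();
        int total = 0;
        for (String task : path)
            total += taskCost(task, baseCost, dependsOn, memo);
        return total;
    }
}
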
The workflow thus constructed looks like a decision tree where branching is done based on the truth of OWL axioms and whenever the truth of an OWL axiom is not known, there's a node to "find out" - prompt the user, call some software etc. In a subsequent blog I will give more details about how variable dependencies are managed, how costs are assigned to tasks so that goal paths can be ordered, and what assumptions and heuristics are being used.
This is work in progress. To handle more real-life workflow scenarios, our next step is to represent asynchronous processes and events. The simplest strategy would be to put the whole workflow in a state of limbo whenever there's unknown information and progress on the workflow cannot be made. When in such a state, the operations service accepts changes to the BO ontology and resumes the workflow with the newly found information. In general, it is possible to execute the workflow right from the beginning any time we want, because tasks are performed only for missing information and will be skipped on a second execution if the information is already there (e.g. OWL properties have been stated etc.). So for example when a business object is "edited" with some random properties changing, the workflow can be replayed from the beginning instead of trying to figure out what decision to backtrack and what to keep etc.
Boris

Wednesday, August 10, 2011

Externalizing Data Property Values

Externalize data property values into their own table. This will improve performance in two ways. First, we avoid duplicate literal values, reducing the number of records. Second, by having typed representations of the data, i.e. VALUE_AS_DATE, VALUE_AS_NUMBER, we can query by type without having to do an explicit cast. The initial table should look like so:

create table CIRM_OWL_DATA_VALUE
(
    ID number(19,0) not null,             -- the sequence; the VALUE column in CIRM_OWL_DATA_PROPERTY will now be an fk constraint on this column
    VALUE_HASH varchar2(28),              -- lookup column
    VALUE_AS_VARCHAR varchar2(4000 char), -- a string representation of the value
    VALUE_AS_CLOB clob,                   -- storage column for large string values
    VALUE_AS_DATE timestamp,              -- a typed representation of the value as date
    VALUE_AS_DOUBLE double precision,     -- a typed representation of the value as double
    VALUE_AS_INTEGER number(19,0),        -- a typed representation of the value as an integer
    primary key (ID)
)


The RelationalStore class will need to be modified accordingly to reflect the changes in schema. We will need to check whether a VALUE already exists; this can be done by first checking the VALUE_HASH column. Use of the Java hashCode() is not recommended, as the table can get quite large and hash collisions are a concern. A better approach would be to use a cryptographic hash function, e.g. MD5 or SHA-1, to hash the string value and store its Base64 encoding in the column. SHA-1 hashing should suffice. A Base64 encoding of a SHA-1 digest results in a 28 byte hash, hence the 28 byte column length.
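
For reference, a minimal sketch of the hashing step (class name invented; using java.util.Base64 from newer JDKs for brevity): SHA-1 produces a 20 byte digest and Base64 turns 20 bytes into exactly 28 characters, which is where the varchar2(28) length comes from.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Sketch of the proposed VALUE_HASH computation: Base64-encoded SHA-1 of the lexical value.
public class ValueHash
{
    public static String hash(String lexicalValue) throws Exception
    {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest(lexicalValue.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest); // always 28 characters for a 20 byte digest
    }

    public static void main(String[] args) throws Exception
    {
        System.out.println(hash("1ST"));          // the lookup key stored in VALUE_HASH
        System.out.println(hash("1ST").length()); // prints 28
    }
}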

Also, the VALUE_HASH column could be included in the DATA_PROPERTY table along with the id, for rapid equivalence queries (avoiding a join).

Tuesday, August 9, 2011

Querying the Operations Database

The triple store query translation results in a query that mirrors what is described in this document:

http://www.w3.org/2002/05/24-RDF-SQL/

The noMappingTranslate works with equivalence based queries. Here is an example of a query with a ‘nested’ type:

{
    "sortBy": "hasDateLastModified",
    "hasServiceRequestStatus": "*",
    "atAddress": {
        "Street_Name": "1ST",
        "type": "Street_Address"
    },
    "currentPage": 1,
    "sortDirection": "desc",
    "type": "Garbage_Missed_Complaint",
    "hasDateCreated": "06/14/2011",
    "itemsPerPage": 20
}

The query translator translates this to:

SELECT CIRM_CLASSIFICATION.SUBJECT
FROM CIRM_CLASSIFICATION
JOIN CIRM_OWL_OBJECT_PROPERTY ON CIRM_OWL_OBJECT_PROPERTY.SUBJECT = CIRM_CLASSIFICATION.SUBJECT
JOIN CIRM_OWL_DATA_PROPERTY ON CIRM_OWL_DATA_PROPERTY.SUBJECT = CIRM_CLASSIFICATION.SUBJECT
WHERE (CIRM_CLASSIFICATION.OWLCLASS = ?)
OR (CIRM_OWL_OBJECT_PROPERTY.PREDICATE = ?)
OR (CIRM_OWL_OBJECT_PROPERTY.PREDICATE = ? AND CIRM_OWL_OBJECT_PROPERTY.OBJECT IN (
    SELECT CIRM_CLASSIFICATION.SUBJECT
    FROM CIRM_CLASSIFICATION
    JOIN CIRM_OWL_DATA_PROPERTY ON CIRM_OWL_DATA_PROPERTY.SUBJECT = CIRM_CLASSIFICATION.SUBJECT
    WHERE (CIRM_CLASSIFICATION.OWLCLASS = ?)
    OR (CIRM_OWL_DATA_PROPERTY.PREDICATE = ? AND TO_CHAR(CIRM_OWL_DATA_PROPERTY.VALUE) = '1ST')))
OR (CIRM_OWL_DATA_PROPERTY.PREDICATE = ? AND TO_CHAR(CIRM_OWL_DATA_PROPERTY.VALUE) = '06/14/2011')

Parameters:
1 = 181()
2 = 191()
3 = 194()
4 = 182()
5 = 184()
6 = 185()

Here is a list of functions and operators that the query translator supports:

Function/Operator    Translation                    Sample JSON property expression
greaterThan          SQL GREATER THAN               "hasDateCreated":"greaterThan(\"2011-06-15T19:18:06.552Z\")"
lessThan             SQL LESS THAN                  "hasDateCreated":"lessThan(\"2011-06-15T19:18:06.552Z\")"
like                 SQL LIKE with '%' appended     "hasName": "like(\"Zues\")"
between              SQL BETWEEN                    "hasDateCreated":"between(\"2011-06-15T19:18:06.552Z\",\"2011-06-15T19:18:06.552Z\")"
*contains
*in
*startsWith
*notLike
=                    SQL EQUALS                     "hasName": "= \"Zues\""
>=                   SQL GREATER THAN OR EQUAL      "hasCount": ">= 1"
<=                   SQL LESS THAN OR EQUAL         "hasCount": "<= 1"
>                    SQL GREATER THAN               "hasCount": "> 1"
<                    SQL LESS THAN                  "hasCount": "< 1"

Note: Literal Values are translated to an equals operation on the SQL side.

* Incomplete. Expression parsing is there but translation is incomplete.

Sunday, June 5, 2011

JSON Library

NOTE: The Library described here has grown and moved to Github with official website http://bolerio.github.io/mjson/. It still remains faithful to the original goal, it's still a single Java source file, but the API has been polished and support for JSON Schema validation was included. And documentation is much more extensive.

JSON (JavaScript Object Notation) is a lightweight data-interchange format. You knew that already. If not, continue reading on http://www.json.org.
It's supposed to be about simplicity and clarity. Something minimal, intuitive, direct. Yet, I couldn't find a Java library to work with it in this way. The GSON project is pretty solid and comprehensive, but while working with REST services and coding some JavaScript with JSON in between, I got frustrated at having to be so verbose on the server side while manipulating those JSON structures on the client side is so easy. Yes, JSON is naturally embedded in JavaScript, so syntactically it could never be as easy in a Java context, but all that strong typing of every JSON element just didn't make sense when the structures are dynamic and untyped to begin with. It seemed like suffering the verbosity of strong typing without getting any of the benefits. Especially since we don't map JSON to Java or anything of the sort. Our use of JSON is pure and simple: structured data that both client and server can work with.
After a lot of hesitation and looking over all the Java/JSON libraries I could find (well, mostly I examined the libraries listed on json.org), I wrote yet another Java JSON library. Because it's rather independent from the rest of the project, I separated it out. And because it has a chance of meeting other programmers' tastes, I decided to publish it. First, here are the links:
Documentation: http://www.sharegov.org/mjson/doc
Download: http://www.sharegov.org/mjson/mjson.jar
Source code: http://www.sharegov.org/mjson/Json.java
The library is called mjson for "minimal JSON". The source code is a single Java file (also included in the jar). Some of it was ripped off from other projects and credit and licensing notices are included in the appropriate places. The license is Apache 2.0.
The goal of this library is to offer a simple API to work with JSON structures, directly, minimizing the burdens of Java's static typing and minimizing the programmer's typing (pun intended).
To do that, we emulate dynamic typing by unifying all the different JSON entities into a single type called Json. Different kinds of Json entities (primitives, arrays, objects, null) are implemented as sub-classes (privately nested) of Json, but they all share the exact same set of declared operations and to the outside world, there's only one type. Most mutating operations return this, which allows for method chaining. Constructing the correct concrete entities is done by factory methods, one of them called make, which is a "do it all" constructor that takes any Java object and converts it into a Json. Warning: only primitives, arrays, collections and maps are supported here. As I said, we are dealing with pure JSON; we are not handling Java bean mappings and the like. Such functionality could be added, of course, but....given enough demand.
As a result of this strategy, coding involves no type casts, much fewer intermediary variables, much simpler navigation through a JSON structure, no new operator every time you want to add an element to a structure, no dealing with a multitude of concrete types. Overall, it makes life easier in the current era of JSON-based REST services, when implemented in Java that is.
In a sense, we are flipping the argument from the blog Dynamic Languages Are Static Languages and making use of the universal type idea in a static language. Java already has a universal type called Object, but it doesn't have many useful operations. Because the number of possible JSON concrete types is small and well-defined, taking the union of all their interfaces works well here. Whenever an operation doesn't make sense, it will throw an UnsupportedOperationException. But this is fine. We are dynamic, we can guarantee we are calling the right operation for the right concrete type. Otherwise, the tests would fail!
Here's a quick example:
import mjson.Json;

Json x = Json.object().set("name", "mjson")
                      .set("version", "1.0")
                      .set("cost", 0.0)
                      .set("alias", Json.array("json", "minimal json"));
x.at("name").asString(); // return mjson as a Java String
x.at("alias").at(1); // returns "minimal json" as a Json instance
x.at("alias").up().at("cost").asDouble(); // returns 0.0

String s = x.toString(); // get string representation

x.equals(Json.read(s)); // parse back and compare => true
For more, read the documentation at the link above. No point in repeating it here.
This is version 1.0 and suggestions for further enhancements are welcome. Besides some simple nice-to-haves, such as pretty printing or the ability to stream to an OutputStream, Java bean mappings might turn out to be a necessity for some use cases. Also, jQuery style selectors and a richer set of manipulation operations. Closures in JDK 7 would certainly open interesting API possibilities. For now, we are keeping it simple. The main use case is if you don't have a Java object model for the structured data you want to work with, you don't want such a model, or you don't want it to be mapped exactly and faithfully as a JSON structure.
Cheers,
Boris