Tim Bray just posted a blog post on how information wants to be free and how that's not actually a problem, economically speaking.
Two choice quotes: "the distinction between bits-as-bits and bits-as-a-service may not always be obvious. But it’s crucial, because people will pay for only one of the two." and "» I don’t sell information ... I sell services. «"
This is the kind of thing that Web services ultimately should be about.
I was looking for a way to put my Mac Stickies on my Android phone, so as a first solution I just save the text of all the stickies into a bunch of text files that I can then copy to the phone.
To save the stickies on OS X Lion, I found an AppleScript that does the job, except it has problems with double quotes in the stickies, so I tweaked it; see below.
One remaining limitation is that it can't handle Spaces (the OS X virtual desktops): the script only sees the stickies on the current Space, so I run it manually on every Space that has some stickies on it, and it creates new files for all the stickies it sees. This also means that if you already saved some stickies and you rerun the script on the same Space, you'll get duplicates. If you know how to access the stickies on all Spaces, please let me know.
Here's the script:
set theName to ""
set i to 0
set n to {}
set L to {}
set destFldr to ""
set mydestFldr to ""
if destFldr = "" then
    set destFldr to (choose folder with prompt "Choose a destination folder:") as text
    set mydestFldr to POSIX path of destFldr
end if
tell application "Stickies"
    activate
    tell application "System Events"
        tell application process "Stickies"
            set L to name of windows
            try
                repeat with awindow in L
                    set m to value of text area 1 of scroll area 1 of window awindow
                    set end of n to m
                end repeat
            end try
            repeat with acontent in n
                repeat
                    set i to i + 1
                    set theName to mydestFldr & "stickies" & "_" & (i as string) & ".txt"
                    set existsFlag to ""
                    tell application "Finder" to if exists theName as POSIX file then set existsFlag to "yes"
                    if (existsFlag = "") then exit repeat
                end repeat
                try
                    set theFileReference to open for access theName with write permission
                    write acontent to theFileReference
                    close access theFileReference
                end try
            end repeat
        end tell
    end tell
    tell application "Finder"
        activate
        open destFldr
    end tell
end tell
The W3C has acknowledged WSMO-Lite, a lightweight set of terms for describing the semantics of Web services that builds on the standard SAWSDL. According to the W3C's own Team comment, WSMO-Lite "is a useful addition to SAWSDL for annotations of existing services and the combination of both techniques can certainly be applied to a large number of semantic Web services use cases."
So now, if you were interested in what SAWSDL could be useful for, here's an answer. We are using WSMO-Lite for semantic Web services automation in the project SOA4All, and especially in the SWS registry iServe.
We also apply WSMO-Lite to RESTful Web services - through the microformat hRESTS we structure the HTML documentation that every RESTful API has, and then it's easy to add SAWSDL/WSMO-Lite annotations.
So that's what's been keeping me busy.
Over a year and a half after Axel first told me about this idea, and over a year since it was presented at ESWC 2008, XSPARQL has reached the next step: it has now been acknowledged by the W3C, the Web's standardization body.
XSPARQL is a fusion of SPARQL and XQuery, a query/transformation language able to process RDF and XML data sources and return RDF or XML. It's great for transforming data from XML to RDF or vice versa, and more. Finally the worlds of XML and RDF might be getting closer, yay!
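To give a flavour, a lowering query (RDF in, XML out) looks roughly like this. I'm writing the syntax from memory, with a made-up data file and the FOAF vocabulary, so treat it as a sketch and check the demo for the authoritative form:

```
prefix foaf: <http://xmlns.com/foaf/0.1/>

<people>{
  for $Person $Name from <friends.rdf>
  where { $Person foaf:name $Name }
  order by $Name
  return <person name="{$Name}"/>
}</people>
```

The XQuery part gives you element constructors and FLWOR expressions, while the where clause is an ordinary SPARQL graph pattern over the RDF data.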
Now go check out the online demo. 8-)
David Booth of HP has an article online called RDF and SOA. Summary quoted (emphasis mine):
The following seem to be key principles for leveraging RDF-enabled services in an SOA.
- Define interface contracts as though message content is RDF
- Permit custom XML/other serializations as needed
- Provide machine-processable mappings to RDF
- Treat the RDF version as authoritative
- Each data producer supplies a validator for data it creates
- Each data consumer supplies a validator for data it expects
- Choose RDF granularity that makes sense
Apart from suggesting that RDF can be a good internal view on the data exchanged by Web services, with benefits especially in versioning, David suggests that validation has two faces - the producer should say how to validate that the data makes sense, and the consumer should say how to validate that the data is fit for the use by this particular consumer.
Further, David wonders about the mapping between XML and RDF - XSLT seems good enough for lifting from XML to RDF, and SPARQL seems to be a good start for transforming from RDF to XML. I can heartily suggest XSPARQL, a fusion of XQuery and SPARQL, for both mapping directions, but especially for lowering. (I'm a minor coauthor of XSPARQL.)
A part of REST is the "client-stateless-server" constraint, abbreviated as "stateless". RESTful interactions are stateless. But that does not mean the resources are stateless (as claimed in what otherwise looks to be a nice presentation by Dan Diephouse, via Stefan Tilkov). On the contrary, resources are an embodiment of state. They have state that can be manipulated. It is per-client sessions they should not keep; that's what stateless means. There should be no state but resource state between two client requests.
Repeat after me: Resources should be sessionless!
I've heard the call for us semantic technology researchers "to eat our own dog food" one too many times. Aside from the obvious problem with it (dog food? anyone?), I think those who call for us using our own technologies are often going a step too far. Read the rest of this rant for why.
People seem to be calling for "eating our own dog food" as if we were obliged to do so. We develop new technologies, we use them in the projects where they apply. Should our Web site be semantic? Should our internal management workflows be semantic and automated? We do semantic automation, after all. I've seen many who I suspect would naturally say yes.
I'd say "only when it makes sense, dudes!" My technologies (I work on Semantic Web Services, for those who haven't paid attention) are not easily applicable to our Web site or to our daily activities as a research institute; one could say that our daily operations are not in scope for my technologies. Kinda similar to how I can't really fix my parents' computer.
People should "eat their own dog food" where it is the right tool for the job at hand; they may even want to jump through some hoops in order to showcase their technologies where it makes sense, even though it could, in the particular cases, be done cheaper and faster with pre-existing stuff (Perl?); but people should not be pushed to jump through those hoops.
Maybe those who call for others "to eat their own dog food" should just sit behind their keyboards, code it up themselves, and then ask "why do I have to show you that your stuff applies?" Or think twice before making it sound like others are dumb and incompetent.
BTW, are scientists and researchers in other fields (physics, biology, you name it) expected to eat their own dog food as much as computer scientists are?
It will mostly go unnoticed, but last week (28 Aug 2007), the specification for Semantic Annotations for WSDL and XML Schema (SAWSDL) was published as a W3C Recommendation, as much of a Web standard as a standardization process can give you. I see SAWSDL as a stepping stone towards Web-friendly (and SemWeb-friendly) semantic web services.
A very short overview, adapted from the WG page (and slightly edited):
The SAWSDL Recommendation defines mechanisms by which semantic annotations can be added to WSDL components. SAWSDL does not specify a language for representing the semantic models; instead, it provides mechanisms by which concepts from the semantic models can be referenced from WSDL components as annotations. The semantics can help disambiguate the description of Web services during automatic discovery and composition of the Web services. As its main contribution, SAWSDL defines the following three new extensibility attributes for WSDL 2.0 elements to enable semantic annotation:
- an extension attribute, modelReference, which specifies the association between a WSDL component and a concept in some semantic model, and
- two extension attributes, liftingSchemaMapping and loweringSchemaMapping, which are added to XML Schema for specifying mappings between semantic data and XML, to be used during service invocation or mediation.
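A minimal sketch of what such an annotation looks like in practice (the WSDL 2.0 and SAWSDL namespaces are the real ones; the interface, operation, and all example.org URIs are made up for illustration):

```xml
<wsdl:interface name="Ordering"
    xmlns:wsdl="http://www.w3.org/ns/wsdl"
    xmlns:sawsdl="http://www.w3.org/ns/sawsdl">
  <!-- the annotation points from the WSDL component to a concept
       in some semantic model; normal tooling just ignores it -->
  <wsdl:operation name="order"
      sawsdl:modelReference="http://example.org/onto#PlaceOrder">
    ...
  </wsdl:operation>
</wsdl:interface>
```

The lifting/lowering attributes go on XML Schema element declarations in the same spirit.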
The spec itself is very simple, but its implications are important. Previous Semantic Web Services (SWS) research has always started from a big semantic model (even framework) and tried to do everything, hiding WSDL in the "invocation details you don't wanna know" parts called grounding. With SAWSDL, WSDL becomes the central model and the frameworks can be broken down into small pieces that are then attached to it.
While SAWSDL may not change the end functionality of SWS, it makes it much more comprehensible to WSDL people learning about semantics. Instead of "here's our model, learn it!", the SWS people will now approach the WSDL people with "here are the bits that might help you in various ways in these new clever tools, and your normal tooling will just ignore them so there's no harm."
With a few colleagues, we're working on WSMO-Lite precisely along these lines; the only thing we now need from W3C is a rule language (cf. RIF) to be able to do everything that WSMO does using no invented languages, just a few simple ontologies.
Anyway, this all is a big deal for me because I chaired the working group; it was a year of a lot of work, plenty of it very pleasant; I learned a lot and got to know some very cool people. And it's also a big deal for me because it's supposed to serve as the bridge between my old community (Web Services; I grew up in Systinet) and the new one, SWS research.
I gave this lightning talk (3 min) at a W3C meeting last week, with the same title as this entry. Below is the text, accompanied by these pictures.
(s1)
Hi ladies and gentlemen, my name is Jacek Kopecky and I'll start with a few pictures, if you excuse their amateur quality, and then I'll have a few questions.
(s2) we invented computers to work for us (s3) and they do, but we need to do a lot of programming (s4) and other setup before we can sit back, (s5) relax, sip coffee and wait for the results.
computers can also (s6) make other computers work, we know this from distributed systems. But what if I could only (s7) tell the computer what I want and it would do what I mean?
Semantic web services (s8) attempt a step in that direction. Combining semantic web with web services, trying to enrich both toward automation.
(s9) The semantic web needs services to fulfill its promise. Even the famous article in Scientific American talks about how the computer will make use of automated services.
(s10) The Web itself needs services, and we're already getting them, as part of what's called web 2.0.
(s11) Web services need semantics, because enterprises have problems handling thousands of web services while visionaries talk about billions of them.
Semantic web services promise some automation: (s12) given a user goal, the computer can find suitable web services, put them together, rank them according to my criteria, negotiate with them, and even invoke them.
But most members here don't seem to want to touch semantic web services, not even with a ten-foot stick. I'm here to ask why.
Part of it is that there is (s13) too much promise and hype and (s14) too few success stories, and we, the semantic web services researchers, are to blame, but I have here a wider community so I'll try to shift a bit of the blame on you. 8-)
Are Semantic web services unpopular because (s15) the web community, the semantic web community and the web services community don't talk to each other?
could semantic web services researchers reach out better (s16) to the semantic web ppl? or to the web ppl? or to the web services ppl?
we have all these sorts of people and then some in this room, so please tell me what it is that we don't deliver to you, which if we did, would make you want to help us.
Maybe we can, in the end, (s17) achieve some automation. Thank you very much.
Sanjiva writes about how he was corrected that Google maps are RESTful, i.e. they use URIs for each map segment. He gives an example of http://kh3.google.com/kh?n=404&v=14&t=tqstqrtqqttqqsqqrsrr, calls it "lovely" and asks whether there's any meaning to it.
I fairly quickly guessed that "qrst" are the quadrants of a rectangle. You can check that by adding 'q', 'r', 's', 't' to http://kh3.google.com/kh?n=404&v=14&t=t . And it works recursively, as long as Google has a useful resolution.
Does it make the URI more RESTful if it now makes sense? In general, what makes a URI RESTful? Why is http://jacek.cz/blog/ more RESTful than the map URI above? Just because it's more readable? How about http://www.dalnice.com/d/d01/d01.htm - makes sense to me, but it's Czech-specific.
AFAIK, REST doesn't say how to create URIs, it just says that they can be used for linking. HTML gives the client a way to create URIs with a GET form, but also with Javascript, which can form the map URIs very easily, including the URI for the segment next to the one at hand etc.
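Out of curiosity, here's how such client-side URI construction could work. This is purely a sketch of my quadrant guess: the letter-to-quadrant assignment below is an assumption for illustration, not verified against Google.

```python
# Assumed mapping from (x, y) quadrant of the current tile to its letter:
# q = top-left, r = top-right, t = bottom-left, s = bottom-right.
QUAD = {(0, 0): "q", (1, 0): "r", (0, 1): "t", (1, 1): "s"}

def tile_path(x, y, depth):
    """Encode tile column x, row y at the given depth as a q/r/s/t string."""
    path = []
    for level in range(depth - 1, -1, -1):
        # pick one bit of x and y per level, most significant first
        path.append(QUAD[((x >> level) & 1, (y >> level) & 1)])
    return "t" + "".join(path)  # "t" is the root tile covering everything

def neighbour_east(path):
    """Path for the tile just east of the given one: decode, shift x, re-encode."""
    rev = {v: k for k, v in QUAD.items()}
    x = y = 0
    for ch in path[1:]:  # skip the leading root "t"
        qx, qy = rev[ch]
        x, y = (x << 1) | qx, (y << 1) | qy
    return tile_path(x + 1, y, len(path) - 1)
```

With that, a script on the page can turn the tile at hand into the URI of the next segment over by string manipulation alone, no server round-trip needed.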
I probably just don't understand what Sanjiva means by RESTful URIs.
I had two presentations at the WWW2006 conference in Edinburgh. Both were in the W3C track and here are the slides: WSDL 2.0 and Semantic Annotations for WSDL (comments welcome). The session was chaired by Hugo and also contained two very informative presentations by Paul. Very good to see them both again! 8-)
I'll probably be accused of missing several points here, but I have a different take on the WS-Addressing discussion, which I became aware of through Stefan Tilkov: WS-A and the Web.
Elliotte compares Endpoint References (EPRs) to URIs and argues that EPRs do nothing but add complexity. The rest of the debate (Stefan, Steve Vinoski, Ted Neward) focuses for some reason on how nobody (or plenty of people) uses other protocols than HTTP. I don't know why it turned this way.
WS-Addressing defines message information headers, which nobody in this particular debate seems to object to (and they look pretty valuable to me), and the endpoint references, the source of contention.
EPRs use URIs as the base of the references. But EPRs are not just addresses; they're references that do more than just address Web services (or resources). They can carry metadata (but that's trivial without EPRs as well) and they can carry so-called parameters. These parameters make EPRs interesting - basically, an EPR with parameters is like a URI with cookies.
I believe most Web Architecture people will agree nowadays that there is value in cookies (if they aren't overused/abused), and one complaint about cookies is that they aren't bookmarkable - that I can't give a link to a buddy if part of what I want to give them is managed with cookies.
Now one could argue that when this is a useful scenario, cookies are overused. Except that that wouldn't be totally true: let's have an example website where cookies serve to keep track of a user for personalization (I hope this is not a contentious application of cookies). I can't easily give somebody a link to see the page as I see it (personalized for me), yet sometimes I might want to do that.
EPRs allow me to package the cookies and the URI and send it to a buddy Web service so that it can access what I access. Maybe the whole contention is that people already know that EPR parameters will be overused/abused and don't want to give them a chance? Not everything belongs in the URI, after all.
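To sketch the analogy (the namespace is the one I remember from the 2004 member submission; the service URI and the SessionId parameter are made up):

```xml
<wsa:EndpointReference
    xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
    xmlns:ex="http://example.org/shop">
  <wsa:Address>http://example.org/shop/service</wsa:Address>
  <!-- the "cookies": opaque parameters the sender echoes back in its messages -->
  <wsa:ReferenceParameters>
    <ex:SessionId>abc123</ex:SessionId>
  </wsa:ReferenceParameters>
</wsa:EndpointReference>
```

Unlike browser cookies, the whole bundle is an explicit piece of data I can hand to someone else.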
I guess it's an ancient discussion about which style of mentioning references in text ([1] or [Doe, 1998]) is preferable.
I grew up with the shorter, numeric style, and I prefer it because it improves readability for me. When I objected to the more verbose style some time ago, a good argument was presented to me, though: after some time in the field, one doesn't have to look up the exact reference at the end of the paper, because they will know what important thing John Doe wrote in 1998. But I'm now reviewing a paper that refers to a couple of Web Services specifications as [Don, 2003] and [Keith, 2004]. I think the authors come from a place where the usual order of first name, last name doesn't hold, but it's still so funny. Feels like we researchers in Web Services can just use the first names, because we are such a friendly community. 8-)
BTW, I believe I've briefly met both Don Box and Keith Ballinger, and I found them friendly enough. 8-)
Jim Webber has a piece where he posits that W3C's bureaucracy is actively working against Web services.
Quoting from his post:
I was amazed at the bureaucracy it takes just to handle a one-way transfer of a SOAP message. It involves at least 3 committees [...] and requires the instantiation of a new WSDL MEP!
Just as with SSDL, we the people are free to extend SOAP and WSDL for one-way transfer. In a short document I could show by example how something like that would work, and it may be implemented interoperably and all would be good. But more likely than not there would be misinterpretations and general opposition to implementing Jacek's thing.
On the conceptual level, the short document would be equal to creating a new SOAP MEP, extending the SOAP HTTP binding (or creating a new one) to support that MEP, and extending/creating a WSDL binding that would use that SOAP-level stuff. The short document would probably skip a lot of formalities which could hinder reuse or clarity of the spec.
Now W3C cannot produce such a short document because their main goals include reuse and total unambiguity of the specs. And it has multiple committees focused on multiple (mostly) orthogonal specifications, because one big W3C Web Specifications Working Group would clearly not work without spinning out a number of focused task forces, probably very similar to the WGs and TFs we have now.
W3C works the same way for all its specifications, not just for Web services.
Savas replied to my comments on SSDL. Thinking about my reaction, I thought about something else, though:
How exactly is SSDL better than WSDL?
SSDL is SOAP-only, but in WSDL you can easily just standardize with your partners on a single binding (because bindings in WSDL 2 can be reused across interfaces).
SSDL models headers, but I haven't seen yet an example of where a concrete header is better in a contract than a (possibly abstract) feature.
SSDL doesn't imply any semantics, expecting the functionality that a service (or an interaction) provides to be defined by the specifications of the contracts
- but SSDL is a specification of the contract, right? So the semantics would be in English, in documentation within the SSDL file, right? Same with WSDL, except WSDL's interface (or operation) is the natural place where such documentation belongs, whereas in SSDL it's (intentionally) not clear.
Instead of using SSDL, could we easily profile WSDL? Maybe such a profile would be of interest to the WS Description Working Group 8-)
I have just seen SSDL (via Jim Webber) and I have mixed feelings.
The good thing I see in SSDL is that it is tightly focused on SOAP messages, which makes sense in those environments where Web services have little Web in them.
The biggest problem I have with the SSDL proposal is that I don't see the usage model. One of the important use cases for describing services is discovery. For that I'd like a service to point me to its contract, not a contract to enumerate all the services it applies to. I guess it's implied by the packaging that the protocols defined in a contract apply to all the endpoints contained in that contract, but this looks the wrong way around.
Behind WSDL's interface and operation constructs there's the implied promise that the interface (or operation) does something. A protocol in SSDL doesn't give such promise, it just dryly specifies the message constraints. In my view, WSDL can be used for discovery of functionality (an agreed interface, for example) but not so SSDL. During discovery, I'm looking for functionality, not a protocol.
I have to admit that I have met opposition (even in the WSDL working group) to ascribing functionality (semantics) to interfaces or operations, but I still can't see how anybody can interpret the terms interface and operation as just specifying the message schemas and sequencing.
I also have a number of smaller problems with SSDL:
This is a little pet peeve of mine. I was pointed through Stefan to a presentation on Web Services (originally from April 2004, apparently), and once again, as one alternative to HTTP for moving SOAP messages around, the author mentioned SMTP. While I admit SMTP is a major protocol involved in email exchange, what people want for SOAP is email, not SMTP. I expect people want to use the existing infrastructure: to receive their email (SOAP messages) via IMAP or POP, and likewise to send it using whatever their system provides, which may be real SMTP but may also be invoking sendmail.
I was among the authors of the SOAP Email Binding and I fought successfully against making this particular document an SMTP binding. The document in fact only mentions actual protocols in this paragraph:
It is not the responsibility of this SOAP binding to mandate a specific email infrastructure, therefore specific email infrastructure protocol commands (such as SMTP, POP3, etc) are not covered in this binding document. The underlying email infrastructure and the associated commands of specific email clients and servers along the message path are outside the scope of this email binding.
Repeat after me: email, not SMTP, is a viable alternative to HTTP for delivering SOAP messages.
Recently, Rich Salz has published an article at xml.com wherein he criticizes WSDL 2 heavily. I used to be active in the WS-Description working group (nowadays I'm mostly lurking there with little time to contribute meaningfully) so here's my opinion on the criticisms.
First, a conclusion: while Rich has some good points (and I haven't read the comments he formally submitted to the WG), his big complaints (listed below) are at least debatable. That doesn't mean the WSDL specs are beyond blame; maybe they can include some rationales and other explanatory text that would give readers more understanding. I'm sure Rich's comments will lead to improvements in WSDL 2.
In the first section, "WSDL 2: Just Say No", Rich wants TimBL to come to our face-to-face and spank us all, plus return the documents to us. Rich doesn't have the W3C process right: in Last Call, the documents are publicly reviewed, the comments are handled by the WG, the documents are updated appropriately and only then can they be submitted for the next step, Candidate Recommendation, to start formally gathering implementation experience. Rich has submitted his comments to the WG and the WG is adding them to their issues list. Any such comments that are not resolved to the commenter's satisfaction will later be brought to TimBL's attention so there should be no worry.
In the second section, "No Standard WSDL File", Rich says the spec doesn't contain a valid example WSDL file. The WG decided that the spec itself will not contain such examples, but that a primer will be produced where examples will be abundant. For the developer of a WSDL tool, the spec itself is the guide, as examples may be misleading in their simplicity as to what the actual rules are. The users, on the other hand, need not read the spec, just the primer should suffice. And Rich says there is no normative XML Schema for WSDL - he must have missed table 1-1 which links to it: http://www.w3.org/2004/08/wsdl. The schema also contains a good definition for the "element" attribute that Rich explicitly points out.
The third section, "Yet Another Data Model", complains about the component model of WSDL. This approach to modeling WSDL was chosen because it's (according at least to my limited experience) the easiest way to have a formal model of a language without depending overly on the syntax. Even XML is this way, the infoset is the formal model (even though it came later than the XML 1.0 spec) and XML 1.0 specifies how that is serialized. Infoset doesn't care about single or double quotes around attributes, WSDL component model doesn't care about ordering of XML elements, or even about imports, for that matter.
In the fourth section, "COM Comes Back", Rich mentions that two of the editors raised a formal Last Call objection against the spec. What he misses is that this objection has been here from the beginning, that this issue came down to a vote (after lengthy consensus-seeking) and they were voted down. I'd say "that's life", and the objection will still be considered before the specs become Recommendations. Also, the only mention of COM (implying bad, obsolete technology) is comparing interface inheritance with vtable extension, with no rationale why this would be considered bad.
As for comparing components, I expect a proper tool will seldom really have to compare their properties because the situation will not be common. Let me illustrate on the diamond inheritance scenario: interfaces B and C extend interface A, interface D extends interfaces B and C. A parser will logically get all components from interface A twice, but it can easily remember their source and assume that parsing a single interface twice will, in fact, yield equivalent components.
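To make the bookkeeping concrete, here's a toy sketch (plain Python, not tied to any real WSDL tool) of collecting operations in that diamond: by visiting each interface only once, A's components are gathered a single time and never need to be compared for equivalence.

```python
# Hypothetical interface hierarchy: B and C extend A, D extends B and C.
interfaces = {
    "A": {"extends": [], "operations": ["opA"]},
    "B": {"extends": ["A"], "operations": ["opB"]},
    "C": {"extends": ["A"], "operations": ["opC"]},
    "D": {"extends": ["B", "C"], "operations": ["opD"]},
}

def all_operations(name, seen=None):
    """Collect (source interface, operation) pairs, visiting each interface once."""
    if seen is None:
        seen = set()
    if name in seen:
        return set()  # already collected via another inheritance path
    seen.add(name)
    ops = {(name, op) for op in interfaces[name]["operations"]}
    for parent in interfaces[name]["extends"]:
        ops |= all_operations(parent, seen)
    return ops
```

Because every component is keyed by its source interface, reaching A through both B and C yields the same pairs rather than two rival copies to reconcile.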
As for the uniqueness good practice for names inside a component, I expect Rich means the recommendation that multiple interfaces in a single namespace don't define different operations with the same names, which would make it impossible for one other interface to extend both such conflicting interfaces. Yes, if two interfaces would naturally have different operations with the same name, the developer can choose to give them different namespaces, because I strongly believe that namespace URIs are cheap.
I'm currently in Belmont, California in the W3C Workshop on Constraints and Capabilities for Web Services. We're basically talking about policies (not just WS-Policy, but these folks are here as well). Read on for my thoughts on this and for links to other participants.
I hope the outcome is a future W3C working group on Web Services Policy language that would be chartered to come up with a framework for capturing non-functional capabilities, constraints and preferences (cc&p's) for web resources and web services, including semantic web resources and semantic web services. I mention SWS explicitly because this policy language might in fact become a significant part of semantic web services description language.
I would discourage the use of the policy language for expressing functional cc&p's, including choreographic constraints and constraints on the content of message bodies, purely for practical reasons (to keep its scope simple and clear).
We (DERI) have an accepted position paper here, where you can read in section 3 what we see as non-functional capabilities, constraints and preferences.
Finally, I'm very glad for this opportunity to see again many cool folks, including co-chair Mark Nottingham, alphabetically Glen Daniels, Paul Downey, Dave Orchard, and Jeffrey Schlimmer, mentioning only those whose blogs I can readily link to, and many others, especially the W3C folks. If you feel left out, send me a link to your blog. 8-)
Through Chris Ferris I got to know about the WS-Addressing Submission to W3C. I really wonder where this is going to be taken, especially with regard to the Web Architecture. So far, SOAP 1.2 can play well with the Web, WSDL 2 is on its way to be able to describe web resources well enough (hopefully), but I don't really see strong use of Addressing in Web-friendly applications at the moment. It's surely going to be interesting.
I had a presentation on URIs yesterday, linked here in case any of my readers are wondering about how much can be said about URIs without even going into much detail. Any questions or suggestions are welcome, it seems I could be reusing this material in a Uni course on Web Engineering soon.
Jim Webber mobilizes against SOAP action, saying we really need to get this last vestige of RPC-ness struck off.
IMO it's Jim's problem that he sees the action parameter of the media type as useful only for RPC. Just like media types are a means of dispatch for the processing of internet documents, media type parameters can be useful for sub-dispatch. I think we're really missing a namespace parameter on the application/xml media type. I was there (in the XMLP WG) when this was discussed; we had strong voices for keeping the SOAP action, so we moved it to the media type parameter, where it makes the most sense.
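For illustration, the parameter rides along in the HTTP Content-Type header, so a receiver can dispatch on it without parsing the body (the request URI and action value here are made up):

```http
POST /stock HTTP/1.1
Host: example.org
Content-Type: application/soap+xml; charset=utf-8; action="http://example.org/GetQuote"
```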
A short work update: a major part of my work here is now around Triple-based computing, started with a vision paper by my boss, Dieter Fensel. It hints at the direction in which I think Semantic Web should go and upon which SemWeb Services should be built.
We had a research seminar this weekend in the beautiful university resort in Obergurgl, 2000m high, with 3000m mountains around it. We couldn't enjoy it properly because of the all-day-long presentation programme so I'll have to go there privately one day. Of course, I managed to take some pictures, too.
I had a presentation there on the background of core Web Services standards, partly as a way of introducing myself to my new peers. It's sometimes simplified for the benefit of the intended audience, so there's no need to dissect it and flame me for every detail. 8-)
Apparently the current discussion in the WS area is about the new SOAP Resource Representation header. I was also a member of the XMLP group (until I left Systinet) and I helped push for this new spec.
I agree with MNot's comments: this is not meant to binarize XML. XOP/MTOM can be used when a piece of binary data is a logical part of the message data for any reason (some applications just naturally use octet[] structures), but also when, for optimization reasons, something from the Web is stapled to the message. The Representation header just makes the second use case clear and explicit.
I'm lucky to have joined DERI just days before its off-site meeting in Crete (Greece). I have some pictures and a short summary of my impression: Nice!
I think such a meeting is a very good way to meet and get to know all my new co-workers (I don't even try to remember all the names, though), and especially those from Galway (DERI Ireland) whom I'm going to see over teleconferences.
I guess not everyone knows this yet so here it goes. I'm leaving Systinet in two weeks and starting at DERI Innsbruck to work on Semantic Web projects and to try to get my PhD on the way to my dream job of a university professor.
I've been at Systinet (previously known as Idoox, for those who remember) for almost four years and I've never regretted coming here; it's a great company with lots of opportunities. Now the time has come to move on, but I will miss Systinet greatly.
The WS-Description WG at W3C discovered recently that it may be unclear what WSDL bindings are meant for and what the boundary between bindings and interfaces is. I wrote up my take, if you're interested see my email message.
My summary is: the boundary is in the application - information important for the application goes into interfaces, implementation details go into bindings.
There's been a lot of talk on the difference between Web Services and Distributed Objects. Just to chime in, my opinion is that Web Services are the communication infrastructure on which Distributed Objects systems can be built. Efforts like WS Resource Framework go in that direction, I think.
Stefan writes Even more on WSDL vs. IDL (apparently as a follow-up to Jim and Mark).
According to Jim, The killer differentiator is that for a given WSDL portType (or soon "interface" in WSDL 2.0) there is no implication that the portType is "implemented" by a specific class at the back-end.
Why precisely does he think there is such an implication for IDL?
In my opinion, IDL (in cooperation with the actual wire format specs like IIOP and COM/DCOM) only defines that something will receive messages of some form and produce others.
In a pseudo-protocol, a message "invoke operation add with the parameters int 3 and int 5" is guaranteed to be replied to with a message of the form "result of operation add is int x".
I don't think IDL specifies that there will be an actual object with a method named add that will be invoked, because what exactly is an object? I can code such a beast in machine language and it will still work, even though my machine language has no notion of objects.
If there is any difference between IDL and WSDL, it's that WSDL is more explicit about not depending on what the implementation looks like, and Web Services have a much lower entry barrier than, say, CORBA or COM.
I'm now reviewing the newly updated WS-Addressing specification. I have a few rants, as expected. 8-)
First, a general observation: the intent seems to be that all WS-Addressing-compliant endpoints use and require the message information headers (wsa:To, wsa:MessageID, wsa:Action) in all messages. IMHO this raises the entry barrier considerably, especially by tying messages to WSDL descriptions (via how the wsa:Action value is established).
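For reference, the three headers in a request might look like this (the header names are from the spec; the prefixes and all values are made up for illustration):

```xml
<soap:Header>
  <wsa:To>http://example.org/purchasing</wsa:To>
  <wsa:MessageID>uuid:6b29fc40-ca47-1067-b31d-00dd010662da</wsa:MessageID>
  <!-- the Action value is typically derived from the WSDL description -->
  <wsa:Action>http://example.org/purchasing/SubmitOrder</wsa:Action>
</soap:Header>
```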
Now for the more specific comments: