One of the web services I inherited in my current job contains a WSDL schema with the following namespace definition:
<xs:schema
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:http="http://schemas.xmlsoap.org/wsdl/http/"
xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"
xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:mns="http://my.example.com/sumproj/msgs.xsd"
xmlns:tns="http://my.example.com/sumproj"
xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
targetNamespace="http://my.example.com/sumproj/msgs.xsd"
elementFormDefault="qualified">
All of the external URLs listed above can be accessed through a web browser, but the internal ones (my.example.com domain) only produce a "Not Found" error:
Not Found
The requested URL /sumproj/msgs.xsd was not found on this server.
Since the person to ask about why certain things are implemented the way they are has left, I am wondering whether this is a bug or a feature:
Has the authoring developer intended to place the namespace XSD on that http://my.example.com server and simply didn't get to implementing it?
Or is it some type of security feature in which the URL is only accessible to the web service through a config XML, effectively defining this URL as virtual when it actually accesses localhost or similar?
If the latter, how does this work, and where can I learn more about it?
(Note: example.com above is only used for illustration, I can't really disclose my employer's internal URL)
Has the authoring developer intended to place the namespace XSD on that http://my.example.com server and simply didn't get to implementing it?
That's possible, but not certain; without reading the developer's mind, no firm answer is possible. A rational developer might do it either way -- even if many people, including me, would say it's better practice to put namespace documentation (possibly in the form of a schema document, possibly a RDDL document, possibly another form) at the namespace URI.
Or is it some type of security feature in which the URL is only accessible by the web service through a config XML, effectively defining this URL as virtual when it actually access localhost or similar?
Not impossible, but this is not a design technique that seems to be widely documented. (There's at least one XML geek who has never heard it suggested.) So I would guess the answer is: probably not the original author's intent.
Can you point me to an authoritative source where I can read about this?
The authoritative sources for XML namespaces are the documents "Namespaces in XML 1.0 (Third Edition)" and "Namespaces in XML 1.1 (Second Edition)", edited by Tim Bray et al. and published by the World Wide Web Consortium in 2009 and 2006, respectively.
It's difficult to prove a negative concisely, but examination of the spec should make clear that it imposes no requirement that URIs used as namespace names should be dereferenceable. Section 2.1 says "An XML namespace is identified by a URI reference [RFC3986]" but nothing in the spec requires that namespace names be dereferenced. Section 2.3 defines the process of namespace-name matching as a test for string identity and points out that this means that equivalent forms of the same URI (e.g. http://www.example.org/wine and http://www.Example.org/wine) do not count as equal for purpose of namespace-name matching. This would make no sense at all if namespace processors were required to dereference namespace names.
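As a small illustration of the string-identity rule, here is a Python sketch using the standard library's ElementTree parser. The parser records the namespace name as an opaque string and never touches the network; the two URIs below differ only in case, so they count as two distinct namespaces:

```python
# XML parsers match namespace names by string identity, without ever
# dereferencing them. These URIs are equivalent as URLs but differ as
# strings, so the parser treats them as two different namespaces.
import xml.etree.ElementTree as ET

doc_a = '<w:wine xmlns:w="http://www.example.org/wine"/>'
doc_b = '<w:wine xmlns:w="http://www.Example.org/wine"/>'

tag_a = ET.fromstring(doc_a).tag  # '{http://www.example.org/wine}wine'
tag_b = ET.fromstring(doc_b).tag  # '{http://www.Example.org/wine}wine'

print(tag_a == tag_b)  # False: same resource as a URL, different namespaces
```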
The W3C Technical Architecture Group (TAG) discusses this complex of questions from a higher-level point of view in its document Architecture of the World Wide Web, Volume One. See in particular
section 3.5 (which enunciates the principle that "Reference does not imply dereference": "An application developer or specification author SHOULD NOT require networked retrieval of representations each time they are referenced.")
section 4.5.3 on XML namespaces
section 4.5.4 on namespace documents (which the TAG recommends)
See also this related question (and my answer to it).
if a namespace URL does point to a resource, does this necessarily imply that the WSDL schema must consult it at runtime?
No. WSDL can impose extra rules on the use of namespace names in WSDL documents, but the document fragment you quote is from an XSD schema document. The XSD spec does not require that namespace names be dereferenced (although it does suggest it as one possible strategy for locating XSD schema documents for use in a validation episode).
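For illustration, here is a hypothetical instance document (the m:sum element name is invented) showing the location-hint mechanism the XSD spec does provide: the xsi:schemaLocation attribute pairs a namespace name with a hint about where a schema document might be found, so the namespace URI itself never needs to be dereferenceable.

```xml
<!-- Hypothetical instance document. xsi:schemaLocation pairs a namespace
     name with a *hint* for locating a schema document; the namespace URI
     itself is never dereferenced by the processor. -->
<m:sum xmlns:m="http://my.example.com/sumproj/msgs.xsd"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://my.example.com/sumproj/msgs.xsd
                           file:///local/schemas/msgs.xsd"/>
```

Note that even this is only a hint: a validator is free to ignore it and locate the schema document some other way.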
If the answer is no, may it consult it at runtime? Or is it only for human reading?
It is not forbidden for processors to dereference namespace names, but some authorities discourage it. When widely deployed software dereferences namespace names every time it runs, or every time it reads an XML document, the result can be excessive and unnecessary network traffic. W3C has suffered under such unnecessary traffic for some years and now serves all schemas at artificially slow rates in order to persuade users to complain to their software vendors. (See Ted Guild's blog post of 2008 for details.)
XML namespaces are URIs (Uniform Resource Identifiers) which are intended to distinguish the author's meaning of the element 'book', say, from the meaning of the element 'book' as it might appear in any other XML document.
If I choose a namespace for my schema, I need to be absolutely sure it won't be the same as a namespace that anyone else might choose. To make sure such a clash won't happen, authors usually make their namespaces begin with a domain name they own. This makes them look like URLs, but there is no requirement that the namespace URI actually point at a useful resource, or indeed at anything at all. Helpful organisations, like the W3C, actually post resources at these URIs, which is why the external namespaces in the document above worked.
The source of the claim is in the Docutils documentation, at https://docutils.sourceforge.io/docs/ref/rst/directives.html#include:
Warning
The "include" directive represents a potential security hole. It can be disabled with the "file_insertion_enabled" runtime setting.
What exactly is it that I should be concerned about, and if it's a potential security hole why hasn't it been removed?
"If it's a potential security hole, why hasn't it been removed?" By that logic, any programming language can be used to write malware, so why do programming languages still exist? The right attitude in security discussions is often to accept certain risks and build fences against them.
To understand the security risks, you need to build up a specific context, such as generating a web site from Sphinx files.
If you choose another context such as generating PDF files, the risks can be different.
First, in .rst files people can write bad JavaScript code in a raw directive:
.. raw:: html

   <script>
   function badCode() {
       alert("I am bad code");
   }
   badCode();
   </script>
Second, the include directive allows you to include files you might not fully control (for example, a file from external storage, or simply one outside your working directory).
So, if you don't pay enough attention to what you include and someone tampers with the contents pulled into your Sphinx files, the generated site can contain malicious code and harm the end users who view those pages.
To minimize such risks:
Sphinx allows you to disable include completely, as that documentation page says, but then you lose a useful feature.
You can include only contents that you trust (from the same Git repository, for example), because a good enough code review can reduce the risks coming from there.
If you still want to include external contents, set up procedures to actively verify the actual contents before generating the web site.
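As a sketch of the disable-it-completely option: Docutils itself exposes the setting behind that warning, and it can be turned off either on the command line (the --no-file-insertion option, which disables both "include" and "raw") or in a configuration file. The section name below follows the Docutils configuration documentation; verify it against your Docutils version.

```
# docutils.conf -- a sketch; check the section and setting names against
# your Docutils version's configuration documentation.
[restructuredtext parser]
file_insertion_enabled: false
raw_enabled: false
```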
In your online documentation regarding domain services ("https://aspnetboilerplate.com/Pages/Documents/Domain-Services") you have a section called "How do we force to use of the Domain Service?"
In there you imply that there is a lot of external documentation around the concept of injecting a kind of "policy" service into the entity as a way to do this, but the article is vague on the implementation of that class, along with where it should be injected and how it is used. I have been scouring the Internet for examples of this kind of design to force the use of domain services, but haven't been able to find anything.
Just browsing that documentation leaves too many questions.
Additionally, I was hoping I could find a simple Abp sample that provided an example of this, but could not find anything.
I'm very curious about this because I have found it to be a big problem with large projects in the past: developers writing their own code in the application service layer, not knowing that the capabilities were already provided in some domain driven "Manager" service.
Can you provide a quick small sample of this concept fully implemented? Using Abp would be great, but a generic example would be fine as well.
take care,
jasen
A few thoughts:
The Policy pattern in that code plays no role in forcing use of the domain service. It only makes task assignment more modular and more SRP-compliant.
There's only so much defensive programming can do for you. Sure, making AssignedPersonId protected so that it cannot be assigned directly is a good thing, but a programmer could just as well change it back to public. Don't rely too much on technical measures to prevent bad developer behavior -- shared practices and team culture are much more effective.
Questioning application sample or template code (as you do) is sound. Don't take that code as gospel truth - it was never meant to be exemplary in the first place. Try your own stuff and learn from your mistakes. Experience is not something that can be transmitted through such a document.
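To make the discussion concrete, here is a minimal language-neutral sketch (in Python, with invented Task/TaskManager/Person names -- not Abp's actual API) of the underlying idea: the entity hides its setter, and the domain service is the one sanctioned path for the state change, because it is where the invariants live.

```python
# Minimal sketch of funneling a state change through a domain service.
# All names here are hypothetical; this is not Abp's API.
class Person:
    def __init__(self, id, is_active=True):
        self.id = id
        self.is_active = is_active


class Task:
    def __init__(self, title):
        self.title = title
        self._assigned_person_id = None  # "protected" by convention only

    @property
    def assigned_person_id(self):
        return self._assigned_person_id  # read-only from the outside


class TaskManager:
    """Domain service: the one sanctioned way to assign a task,
    because it enforces the business rule."""

    def assign(self, task, person):
        if not person.is_active:
            raise ValueError("cannot assign a task to an inactive person")
        task._assigned_person_id = person.id  # deliberate privileged access
```

And this is exactly the limit the answer describes: nothing technically stops a developer from writing `task._assigned_person_id = ...` directly. The underscore is only a convention; team practices do the real enforcement.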
I'm working on my first "real" DDD application.
Currently my client does not have access to my domain layer and requests changes to the domain by issuing commands.
I then have a separate (flattened) read model for displaying information (like simple CQRS).
I'm now working on configuration, or specifically, settings that the user configures. Using a blog application as an example, the settings might be the blog title or logo.
I've developed a generic configuration builder that builds a strongly typed configuration object (e.g. BlogSettings) based on a simple key value pair collection. I'm stuck on whether these configuration objects are part of my domain. I need access to them from the client and server.
I'm considering creating a "Shared" library that contains these configuration objects. Is this the correct approach?
Finally where should the code to save such configuration settings live? An easy solution would be to put this code in my Domain.Persistence project, but then, if they are not part of the domain, should they really be there?
Thanks,
Ben
User-configurable settings belong to the domain if they are strongly typed and modeled on the ubiquitous language, e.g. 'BlogSettings'. The only difference between settings and other domain objects is that, conceptually, settings are 'domain singletons': they don't have a life cycle like other entities, and you can only have one instance.
The generic configuration builder belongs to Persistence, just like the code that is responsible for saving and reading settings.
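A minimal sketch of that split, using invented names (BlogSettings, build_settings): the strongly typed settings object lives in the domain and speaks the ubiquitous language, while a generic builder in Persistence turns a flat key-value collection into an instance of it.

```python
# Sketch: a domain-side, strongly typed settings object built from a
# persistence-side flat key-value collection. Names are hypothetical.
from dataclasses import dataclass, fields


@dataclass(frozen=True)
class BlogSettings:          # domain object, one conceptual instance
    title: str = "My Blog"
    logo_url: str = "/logo.png"


def build_settings(cls, pairs):
    """Generic builder (Persistence): keep only keys the settings class
    declares and fall back to the declared defaults for the rest."""
    known = {f.name for f in fields(cls)}
    return cls(**{k: v for k, v in pairs.items() if k in known})


settings = build_settings(BlogSettings, {"title": "Ben's Blog", "junk": "x"})
```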
Having found this question, I feel obliged to suggest an answer because I don't like the accepted one.
First off I don't see why this has to be a singleton.
Secondly, there is something very important about settings: they are usually hierarchical, and they almost always need the concept of defaults. Sometimes those defaults are at the item level; other times you might prefer to replicate a whole set of defaults. Also, consider that settings can use inheritance: maybe an agency has a setting, but it allows its agents to override it with their own.
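The hierarchy-with-defaults idea can be sketched with Python's collections.ChainMap, where a lookup falls through from the most specific level to the defaults (the agency/agent names are illustrative):

```python
# Sketch of hierarchical settings: a lookup falls through from the most
# specific level (agent) to agency overrides to system-wide defaults.
from collections import ChainMap

system_defaults = {"theme": "plain", "commission": 0.10, "logo": "corp.png"}
agency_overrides = {"logo": "acme.png"}
agent_overrides = {"theme": "dark"}

effective = ChainMap(agent_overrides, agency_overrides, system_defaults)

effective["theme"]       # 'dark'     -- set by the agent
effective["logo"]        # 'acme.png' -- inherited from the agency
effective["commission"]  # 0.1        -- system default
```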
I wish to set up an OpenID server which will support complex attributes that are not defined in the (http://www.axschema.org/types/) list or in the experimental list. Attributes could be detailed information about a user's work, like a reporting boss ID. My OpenID server and client are both within my control and are not meant to be exposed to the internet.
Is it possible to create this environment within the OpenID protocol? If yes, please suggest which servers, if any, support complex attributes.
The attribute exchange protocol is pretty straightforward:
http://openid.net/specs/openid-attribute-exchange-1_0.html
You'll undoubtedly need to modify it to support these non-standard fields (doubly so because you're probably pulling the data from LDAP or some other database), but it shouldn't be hard.
(As for attribute exchange itself, almost all of the open-source implementations support this.)
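For a feel of what this looks like on the wire: per the AX 1.0 spec, a fetch request is just a set of openid.ax.* key-value fields, and a custom attribute is identified by whatever type URI you choose (the URI below is hypothetical, standing in for an internal schema you would define):

```python
# Sketch of the key-value fields of an OpenID Attribute Exchange fetch
# request asking for one custom attribute. The type URI is hypothetical;
# any URI you control can identify a non-standard attribute.
ax_fetch_request = {
    "openid.ns.ax": "http://openid.net/srv/ax/1.0",
    "openid.ax.mode": "fetch_request",
    "openid.ax.type.boss": "http://my.example.com/schema/reporting_boss_id",
    "openid.ax.required": "boss",
}
```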
(From the point of view of a user, not how it's built or which option is selected in Visual Studio)
...What is the difference between a "website" and a "web application"?
Is there a difference?
Are there characteristics that distinguish the two?
Software applications are software tools designed to help the user perform specific tasks. Web applications simply provide a software application through a web interface. Think Google Docs as a typical example, but web applications can be much simpler.
On the other hand, a website can be regarded as just a collection of related digital assets (documents, images, videos, etc), relative to a common URL.
(Note: I take the definition of a website from Wikipedia and deduce a definition of web applications from that (or, better, define differences between the two concepts). Everything in bold face is meant, put together, to build the definition of a web application.)
Starting with the fundamentals: Is a web application a subset of a website? Following Wikipedia's definition of a website, which Daniel Vassallo has laid out in his answer, a website is a bunch of documents under a common URL. This also follows the definition in the Cambridge dictionary.
A web application, on the other hand, is a bunch of web-based dynamic HTML and JS documents, together with images, CSS files and other documents, that is most probably, but not exclusively, located under a single URL. The purpose of a web application comes below.
Hence we can state: If a web application is located on a single server only, without using client-side cross-domain techniques or extensive local storage (which I'd like to define here as everything beyond standard cookies and default caching), it is also a website.
Corollary: There can be web applications that are not websites.
Hence we have to extend the definition of a web application: A web application, which under certain circumstances is also a website, is a set of interactive documents. Interactive here means that the user can do more than just follow hyperlinks to get from resource to resource: she can actively, and in a well-defined manner, change the state of resources. The web application is, for this task, not confined to a single server, or to the server side at all.
Now we still have to define where a web application ends and anything else begins. Therefore we state: A web application always has an entry point that is located on a website. If it has multiple entry points, they must all be part of the same website.
qed
I am open to any suggestion on how this epic piece of wisdom could be refined to meet the requirements of reality. ;-)
Clarification:
This answer is in no way disrespectful to the question. However, I took a semi-serious approach, by which I mean that the provided definition may or may not fit one's personal idea of what a web application is compared to a website, but (and that is the serious part) it is based on and deduced from a (possibly random) collection of facts.
Clarification 2: This answer has nothing to do with Visual Studio.