Let's say we have the following two entries in our zonefile for mydomain.de
* IN A 35.234.100.200
mytest.test IN A 35.234.100.201
We would now expect that example.mydomain.de resolves to 35.234.100.200.
Instead it does not resolve to anything. Why is that?
Note that when we remove the second entry, the wildcard entry does apply and example.mydomain.de resolves to 35.234.100.200.
Update:
Our bad. The configuration shown above works as expected; we must have been seeing some other, unrelated issue.
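For completeness, the wildcard is easy to verify from the command line. A minimal sketch, assuming the zone's authoritative nameserver is ns1.mydomain.de (substitute whichever server actually serves the zone):
dig +short example.mydomain.de A @ns1.mydomain.de
# expected output while the wildcard record is in place:
# 35.234.100.200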
Using Terraform to configure vSphere VMs, I'd like to be able to provide an IP address (and gateway and netmask) in the tfvars file, but have the VM default to using DHCP if the values are not provided. I know it will use DHCP if the 'vsphere_virtual_machine' resource's 'customize' block contains an empty 'network_interface' block. I was hoping that by giving a default value of "" to the settings in the variables.tf file I could set values if present and use DHCP if not, but I get an error stating:
Error: module.vm.vsphere_virtual_machine.node:
clone.0.customize.0.network_interface.0.ipv4_netmask: cannot parse ''
as int: strconv.ParseInt: parsing "": invalid syntax
So a blank string won't parse, and Terraform won't just leave the whole network_interface block empty when the values are blank.
I can't use count on a sub-resource, so the only thing I've come up with so far is to put two entire, nearly identical vsphere_virtual_machine resources into my module and then put count statements on both so that only one gets created, depending on whether the network settings are provided or not, but man, does that seem ugly...?
I think you are in luck. I've been waiting for this exact problem to be solved for almost a year now.
Lo and behold, Terraform v0.12.0-alpha1:
It now supports dynamic block definitions instead of just static ones.
Enjoy, while I go throw away a couple of hundred lines' worth of hacks just like the one you mentioned...
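A minimal sketch of what the DHCP fallback could look like with a dynamic block, assuming variables var.ipv4_address (empty string means "use DHCP") and var.ipv4_netmask; the variable names and values are my assumptions, not from the original question:
variable "ipv4_address" {
  type    = string
  default = ""            # empty string means "fall back to DHCP"
}

variable "ipv4_netmask" {
  type    = number
  default = 24
}

resource "vsphere_virtual_machine" "node" {
  # ... name, resource_pool_id, disk, top-level network_interface, etc. omitted ...
  clone {
    # ... template_uuid and other clone arguments omitted ...
    customize {
      # Emit a populated network_interface only when an address was supplied.
      dynamic "network_interface" {
        for_each = var.ipv4_address != "" ? [var.ipv4_address] : []
        content {
          ipv4_address = network_interface.value
          ipv4_netmask = var.ipv4_netmask
        }
      }
      # Otherwise emit an empty network_interface block.
      dynamic "network_interface" {
        for_each = var.ipv4_address == "" ? [1] : []
        content {}
      }
    }
  }
}
The second dynamic block just reproduces the empty network_interface that, per the question, makes the VM fall back to DHCP.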
Unfortunately, we have a special folder named "_archive" everywhere in our repository.
This folder has its purpose, but when searching for content/documents we want to exclude it and everything beneath "_archive".
So what I want is to exclude that path and its members from all user searches. The syntax is easy with FTS:
your_query AND -PATH:"//cm:_archive//*"
To test:
https://www.docdroid.net/RmKj9gB/search-test.pdf.html
Take the PDF and put it into your repo twice:
/some_random_path/search-test.pdf
/some_random_path/_archive/search-test.pdf
In the node browser everything works as expected:
TEXT:"HODOR" AND -PATH:"//cm:_archive//*"
= 1 result
TEXT:"HODOR"
= 2 results
So, my idea was to edit search.get.config.xml and add the exclusion to the list of properties:
<search>
<default-operator>AND</default-operator>
<default-query-template>%(cm:name cm:title cm:description ia:whatEvent
ia:descriptionEvent lnk:title lnk:description TEXT TAG) AND -PATH:"//cm:_archive//*"
</default-query-template>
</search>
But it does not work as intended! As soon as I use 'text:' or 'name:' in the search field, the exclusion seems to be ignored.
What other option do I have? Basically, I just want to add the exclusion to the base query after the default query template has been applied.
The version is Alfresco Community 5.0.d.
thanks!
I guess you're mistaken about what query templates are meant for. Take a look at the Wiki.
So what you're basically doing is programmatically saying: I've got a keyword and I want to match it against the following metadata fields.
By default it will match cm:name, cm:title, cm:description, etc. This can be changed to a custom field or, in other cases, to ALL.
So putting an extra AND or OR or whatever in there won't work, because this isn't the actual query that will be built. I could go on about query templates, but that won't do you any good.
In your case you'll need to modify Alfresco's search.get webscript, specifically the getSearchResults(params) function in search.lib.js (which gets imported).
Somewhere near the end of that function it does the following:
ftsQuery = '(' + ftsQuery + ') AND -TYPE:"cm:thumbnail" AND -TYPE:"cm:failedThumbnail" AND -TYPE:"cm:rating" AND -TYPE:"st:site"' + ' AND -ASPECT:"st:siteContainer" AND -ASPECT:"sys:hidden" AND -cm:creator:system AND -QNAME:comment\\-*';
Just add your PATH exclusion to it and that will do.
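Concretely, a minimal sketch of the modified line in search.lib.js; the surrounding clauses may differ slightly between Alfresco versions, and only the trailing PATH clause is new:
// same line as above, with the _archive exclusion appended to the base query
ftsQuery = '(' + ftsQuery + ') AND -TYPE:"cm:thumbnail" AND -TYPE:"cm:failedThumbnail" AND -TYPE:"cm:rating" AND -TYPE:"st:site"' + ' AND -ASPECT:"st:siteContainer" AND -ASPECT:"sys:hidden" AND -cm:creator:system AND -QNAME:comment\\-*' + ' AND -PATH:"//cm:_archive//*"';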
For example:
If Marathon is running a task named /cassandra, Mesos-DNS assigns it a DNS name - cassandra.marathon.mesos.
Now I have a task named /monit/promdash. How can I find its DNS name?
Already tried:
monit_promdash.marathon.mesos, promdash_monit.marathon.mesos (and with - instead of _), monit.marathon.mesos, promdash.marathon.mesos, ...
There's an HTTP interface, but I couldn't find a way to list all DNS names there either...
Thanks,
Marathon reverses the hierarchical name and concatenates the components with -, and that then becomes the app name, so in your case it would be promdash-monit.marathon.mesos. Try it out.
At the bottom of the Mesos-DNS naming documentation we provide some more details about how these FQHNs are constructed, and you can also check out a complete end-to-end example I've put together using two levels of hierarchy.
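As a quick check (the resolver address and HTTP port below are assumptions; substitute the host your Mesos-DNS instance actually runs on), you can query both the DNS and the HTTP interface:
# resolve the generated name via the Mesos-DNS resolver (assumed to be 10.0.0.1 here)
dig +short promdash-monit.marathon.mesos @10.0.0.1
# ask the Mesos-DNS HTTP interface (default port 8123) for the same record
curl http://10.0.0.1:8123/v1/hosts/promdash-monit.marathon.mesos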
We are trying to forward all emails to a specific email address. I think everything is set up okay, such as the 'main.cf' and 'virtual-regexp' files. If we put the following in the 'virtual' file, the forwarding works correctly:
@ourmail.com mainid@ourmail.com
However, if we try to use the following in 'virtual' to send ALL email to the ID, it ignores it and sends it to the original user:
(.*) mainid@ourmail.com
We got the idea for the above from the following question and answer:
postfix 2.9.6.1 forward all mail to an external mail address
Any ideas why the pattern '(.*)' doesn't work? We've tried so many different patterns that our heads are starting to spin.
We solved the issue.
You need to complete the steps listed in the link above, but in addition you need to comment out the following lines in main.cf (if they are there) before restarting the Postfix process:
virtual_alias_maps = hash:/etc/postfix/virtual
virtual_alias_domains = hash:/etc/postfix/virtual
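For reference, a rough sketch of how the relevant part of main.cf could end up looking; this assumes the catch-all lives in a regexp: table at /etc/postfix/virtual-regexp as in the linked answer, so treat the exact paths and map names as assumptions:
# hash-based maps commented out, per the fix above
#virtual_alias_maps = hash:/etc/postfix/virtual
#virtual_alias_domains = hash:/etc/postfix/virtual
virtual_alias_maps = regexp:/etc/postfix/virtual-regexp
Afterwards, reload Postfix (e.g. postfix reload) so the change takes effect.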
I currently have a VM running Titan over a local Cassandra backend and would like the ability to use ElasticSearch to index strings using CONTAINS matches and regular expressions. Here's what I have so far:
After titan.sh is run, a Groovy script is used to load the data from separate vertex and edge files. The first stage of this script loads the graph from Titan and sets up the ES properties:
config.setProperty("storage.backend","cassandra")
config.setProperty("storage.hostname","127.0.0.1")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","db/es")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
The second part of the script sets up the indexed types:
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make();
The third part loads the data from the CSV files; this has been tested and works fine.
My problem is, I don't seem to be able to use the ElasticSearch functions when I do a Gremlin query. For example:
g.E.has("property",CONTAINS,"test")
returns 0 results, even though I know this field contains the string "test" for that property at least once. Weirder still, when I change CONTAINS to something that isn't recognised by ElasticSearch I get a "no such property" error. I can also perform exact string matches and any numerical comparisons, including greater-than and less-than; however, I suspect the default indexing method is being used instead of ElasticSearch in these cases.
Given the lack of errors when I try to run a more advanced ES query, I am at a loss as to what is causing the problem here. Is there anything I may have missed?
Thanks,
Adam
I'm not quite sure what's going wrong in your code. From your description everything looks fine. Can you try the following script (just paste it into your Gremlin REPL):
config = new BaseConfiguration()
config.setProperty("storage.backend","inmemory")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","/tmp/es-so")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
g = TitanFactory.open(config)
g.makeKey("name").dataType(String.class).make()
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make()
g.makeLabel("knows").make()
g.commit()
alice = g.addVertex(["name":"alice"])
bob = g.addVertex(["name":"bob"])
alice.addEdge("knows", bob, ["property":"foo test bar"])
g.commit()
// test queries
g.E.has("property",CONTAINS,"test")
g.query().has("property",CONTAINS,"test").edges()
The last 2 lines should return something like e[1t-4-1w][4-knows-8]. If that works and you still can't figure out what's wrong in your code, it would be good if you could share your full code (e.g. on GitHub or in a Gist).
Cheers,
Daniel