I'm trying to import an ImpEx file containing HTML, and it appears on the webpage without the class. For example, this h1 class (content-page__title) doesn't appear.
UPDATE CMSParagraphComponent; $contentCV[unique = true]; uid[unique = true]; content[lang = $lang]
; ; CMSParagraph-FrequentQuestionsPage ; "<h1 class='content-page__title'>Preguntas frecuentes</h1>"
You are using an incorrect pair of quotes. Use it like below:
UPDATE CMSParagraphComponent; $contentCV[unique = true]; uid[unique = true]; content[lang = $lang]
; ; CMSParagraph-FrequentQuestionsPage ; "<h1 class=""content-page__title"">Preguntas frecuentes</h1>"
Alternatively, you can use the HTML entity encoding &quot; for the inner quotes.
For reference, see the link below:
Escape_Double_Quotes
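The doubled-quote form works because ImpEx values follow CSV-style quoting, where a literal double quote inside a quoted field is written as two double quotes. Python's csv module (used here purely as an illustration, not part of the ImpEx toolchain) applies the same convention:

```python
import csv
import io

# Write one quoted field containing double quotes; with csv's default
# doublequote=True, each inner quote is emitted as "" inside the field,
# which is exactly the form ImpEx expects.
buf = io.StringIO()
writer = csv.writer(buf, delimiter=';', quoting=csv.QUOTE_ALL)
writer.writerow(['<h1 class="content-page__title">Preguntas frecuentes</h1>'])
impex_value = buf.getvalue().strip()
print(impex_value)
# → "<h1 class=""content-page__title"">Preguntas frecuentes</h1>"
```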
I am using PHPMailer.
I want to add dynamic content in PHP to the body, but I haven't been able to do it.
If I add this to the PHPMailer body, it works:
$mail->Body="<h3>Dados do 1º Titular</h3><ul><li><b>Nome: </b>".$titular1_name."</li><li><b>Idade: </b>".$titular1_idade."</li><li><b>Estado Civil</b>".$titular1_estado_civil."</li><li><b>Dependentes</b>".$titular1_dependentes."</li><li><b>Rendimento</b>".$titular1_rendimento."</li><li><b>Penhora</b>".$titular1_penhora."</li></ul><h3>Dados do 2º titular</h3><ul><li><b>Nome: </b>".$titular2_name."</li><li><b>Idade: </b>".$titular2_idade."</li><li><b>Estado Civil</b>".$titular2_estado_civil."</li><li><b>Dependentes</b>".$titular2_dependentes."</li><li><b>Rendimento</b>".$titular2_rendimento."</li><li><b>Penhora</b>".$titular2_penhora."</li></ul><h3>Dados de Contacto</h3><ul><li><b>Email: </b>".$titular_email."</li><li><b>Telefone</b>".$titular_phone."</li><li><b>Localidade</b>".$titular_local."</li></ul>";
But I need to add the result of this loop:
for ($y = 0; $y <= $numberOfRows; $y++) {
    for ($x = 0; $x < $maxData; $x++) {
        array_push($row, $credito, $capital_divida, $prestacao,
                   $bank, $garantias, $situacao);
        echo $row[$x] . "<br>";
    }
    array_push($fullData, $row);
}
This is my relevant code for this issue:
$row = array();
$fullData = array();
$maxData = 6;
$numberOfRows = htmlspecialchars(["numberOfRowsHTML"]);
$titular_email = htmlspecialchars($_POST["titular-email"]);
$titular_phone = htmlspecialchars($_POST["titular-phone"]);
$titular_local = htmlspecialchars($_POST["titular-local"]);
$titular_message = htmlspecialchars($_POST["titular-message"]);
$numberOfRows = htmlspecialchars(["numberOfRowsHTML"]);
$credito = htmlspecialchars(["credito"]);
$capital_divida = htmlspecialchars(["capital_div"]);
$prestacao = htmlspecialchars(["prestacao"]);
$bank = htmlspecialchars(["bank"]);
$garantias = htmlspecialchars(["garantias"]);
$situacao = htmlspecialchars(["situacao"]);
Thank you in advance for all the help
This code doesn't make any sense:
$numberOfRows = htmlspecialchars(["numberOfRowsHTML"]);
It looks like you're missing the name of an array you want to get this value from, for example:
$numberOfRows = htmlspecialchars($_POST["numberOfRowsHTML"]);
Your loop that assembles the array doesn't appear to do anything useful at all: you're getting values from the submission, then assembling an array which you never use. It would be helpful if you could clarify what it is you're trying to do.
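To give a concrete direction: if the row fields are posted as parallel arrays (an assumption, since the question doesn't show the form), the loop could build an HTML fragment and append it to the body. A rough sketch, with the $_POST key names being guesses:

```php
<?php
// Sketch only: "credito", "bank", etc. as array-valued POST keys are
// assumptions; adjust to however the form actually submits the rows.
$rowsHtml = "";
$numberOfRows = count($_POST["credito"] ?? []);
for ($y = 0; $y < $numberOfRows; $y++) {
    $credito = htmlspecialchars($_POST["credito"][$y]);
    $bank    = htmlspecialchars($_POST["bank"][$y]);
    $rowsHtml .= "<li><b>Crédito: </b>" . $credito .
                 " <b>Banco: </b>" . $bank . "</li>";
}
$mail->Body = "<h3>Créditos</h3><ul>" . $rowsHtml . "</ul>";
```

The key point is that the HTML has to end up in the $mail->Body string; an echo inside the loop only sends it to the page output, not into the email.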
I'm trying to track the history of field values that are displayed in a computed field as HTML. So far I've got this:
var x = document1.getItemValue("category");
var html = "<table>";
for (var i = 0; i < x.size(); i++) {
    html = html + "<tr><td>" + x.get(i) + "</td></tr>";
    html = html + "<tr><td>" + session.getEffectiveUserName() + "</td></tr>";
}
html = html + "</table>";
The code works OK: I get the value I need, it gets displayed, and if I edit the current document the value changes via a partial update attached to the save button. That's not the problem, though. The problem I have is with saving it. I thought of creating an array and adding the value that changed, but the script resets everything. Any suggestions on how I can save that value or add it to the field? In forms I was using AppendToTextList; is there any way to achieve that functionality in XPages?
You can add a new value to a field in the querySave event of the DominoDocument in XPages:
var x = document1.getItemValue("category");
x.add(myNewValue);
document1.replaceItemValue("category", x);
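The same read-append-write pattern, stripped of the Domino API (document1, getItemValue, and replaceItemValue are XPages objects), is just this in plain JavaScript:

```javascript
// Plain-JavaScript sketch of the append-to-text-list pattern above.
// The Domino document is mocked as an object with a multi-value field.
function appendToTextList(doc, field, value) {
  const values = (doc[field] || []).slice(); // copy the current values
  values.push(value);                        // append the new entry
  doc[field] = values;                       // write the list back
  return doc[field];
}

const doc = { category: ["Hardware"] };
console.log(appendToTextList(doc, "category", "Software"));
// → [ 'Hardware', 'Software' ]
```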
Chris Toohey just posted a blog article that seems to be what you're looking for.
http://www.dominoguru.com/page.xsp?id=thoughts_on_future_proofing_notesdata_for_application_development.html
I'm trying to create a link to a document in SharePoint 2010 using PowerShell 2.0. I've already enabled other content types and added the 'Link to a Document' content type to the document library in question.
The document that I'm trying to link to is in a different shared document list on another web in the same site collection. The actual file is nested in a subfolder called "PM". The file types may range from Excel files to Word files to PDFs.
I've tested the process manually (Shared Documents -> New Document -> Link to a Document -> ...) and it worked fine (as was indicated by the document icon with an arrow over the bottom right corner when I was done), but I cannot seem to get it to work with PowerShell. Any ideas?
This is the only non-PowerShell solution that I've come across so far:
http://howtosharepoint.blogspot.com/2010/05/programmatically-add-link-to-document.html
I finally got it working by porting the aforementioned solution. There's some superfluous detail here, but the gist of it is easy enough to parse out:
# Push file links out to weekly role page
$site = New-Object Microsoft.SharePoint.SPSite($roleURL)
$web = $site.OpenWeb()
$listName = "Shared Documents"
$list = $web.Lists[$listName]
$fileCollection = $list.RootFolder.Files
ForEach ($doc in $docLoadList) {
    $parsedFileName = $doc.Split("\")
    $actualFileName = $parsedFileName[$parsedFileName.Length - 1]
    $existingFiles = Get-ExistingFileNames $homeURL
    If ($existingFiles -Match $actualFileName) {
        $existingFileObject = Get-ExistingFileObject $actualFileName $homeURL
        $docLinkURL = Get-ExistingFileURL $actualFileName $homeURL
        # Create the new .aspx redirect file
        $redirectFileName = $actualFileName + ".aspx"
        $redirectASPX = Get-Content C:\Test\redirect.aspx
        $redirectASPX = $redirectASPX -Replace "#Q#", $docLinkURL
        $utf = New-Object System.Text.UTF8Encoding
        $newFile = $fileCollection.Add($redirectFileName, $utf.GetBytes($redirectASPX), $true)
        # Update the newly added file's content type
        $lct = $list.ContentTypes["Link to a Document"]
        $updateFile = $list.Items | ? {$_.Name -eq $redirectFileName}
        $updateFile["ContentTypeId"] = $lct.ID
        $updateFile.SystemUpdate()
    }
}
$web.Dispose()
I may end up stringing together the .aspx file in the script too at some point...
I use EWS to get Exchange emails, but how can I get the plain text from the email body, without the HTML?
This is what I currently use:
EmailMessage item = (EmailMessage)outbox.Items[i];
item.Load();
string bodyText = item.Body.Text;
In the PropertySet of your item you need to set the RequestedBodyType to BodyType.Text. Here's an example:
PropertySet itempropertyset = new PropertySet(BasePropertySet.FirstClassProperties);
itempropertyset.RequestedBodyType = BodyType.Text;
ItemView itemview = new ItemView(1000);
itemview.PropertySet = itempropertyset;
FindItemsResults<Item> findResults = service.FindItems(WellKnownFolderName.Inbox, "subject:TODO", itemview);
Item item = findResults.FirstOrDefault();
item.Load(itempropertyset);
Console.WriteLine(item.Body);
In PowerShell:
# ...
$message = [Microsoft.Exchange.WebServices.Data.EmailMessage]::Bind($event.MessageData,$itmId)
$PropertySet = New-Object Microsoft.Exchange.WebServices.Data.PropertySet([Microsoft.Exchange.WebServices.Data.BasePropertySet]::FirstClassProperties)
$PropertySet.RequestedBodyType = [Microsoft.Exchange.WebServices.Data.BodyType]::Text
$message.Load($PropertySet)
$bodyText= $message.Body.toString()
I had the same issue. All you have to do is set the RequestedBodyType property of the property set you are using.
PropertySet propSet = new PropertySet(BasePropertySet.IdOnly, EmailMessageSchema.Subject, EmailMessageSchema.Body);
propSet.RequestedBodyType = BodyType.Text;
var email = EmailMessage.Bind(service, item.Id, propSet);
The shortest way to do it is like this:
item.Load(new PropertySet(BasePropertySet.IdOnly, ItemSchema.TextBody, EmailMessageSchema.Body));
This has the advantage that you get both the text body and the HTML body.
You can use
service.LoadPropertiesForItems(findResults, itempropertyset);
to load the properties for all found items in one batch call.
I am trying to use PowerShell to run through a list of wiki pages that were originally on a separate wiki site, and migrate them to a SharePoint 2010 wiki site. I have a PowerShell script that goes through all the files, creates a wiki page for each, sets the Page Content to the body HTML of the old page, and updates the item, but for some reason none of the layout is shown. The page is just blank, with no boxes shown in edit mode or SharePoint Designer. Here is my script:
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
$SPSite = New-Object Microsoft.SharePoint.SPSite("http://dev-sharepoint1/sites/brandonwiki");
$OpenWeb = $SPSite.OpenWeb("/sites/brandonwiki");
$List = $OpenWeb.Lists["Pages"];
$rootFolder = $List.RootFolder
$OpenWeb.Dispose();
$SPSite.Dispose()
$files = Get-ChildItem ./testEx -rec | Where-Object {!($_.PSIsContainer)}
foreach ($file in $files) {
    $name = $file.Name.Substring(0, $file.Name.IndexOf(".htm"))
    $strDestURL = "/sites/brandonwiki"
    $strDestURL += $rootFolder
    $strDestURL += "/"
    $strDestURL += $name
    $strDestURL += ".aspx"
    $Page = $rootFolder.Files.Add($strDestURL, [Microsoft.SharePoint.SPTemplateFileType]::WikiPage)
    $Item = $Page.Item
    $Item["Name"] = $name
    $cont = Get-Content ./testEx/$file
    $cont = $cont.ToString()
    $Item["Page Content"] = $cont
    $Item.UpdateOverwriteVersion();
}
Here is what a good page, added manually, looks like in edit mode:
http://imageshack.us/photo/my-images/695/goodpage.png/
And here is what the pages created through PowerShell look like:
http://imageshack.us/photo/my-images/16/badpage.png/
I found the answer. The script needs to create a publishing page instead of a list item; SharePoint will then add the appropriate wiki page layout and set everything up for you. Here is my new script, which works, for anyone who also has this issue. Note that it also replaces the doku links with SharePoint ones:
##################################################################################################################
# Wiki Page Creator #
##################################################################################################################
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") #Setup Sharepoint Functions
#Commonly Used Variables To Set
$baseSite = "http://sharepointSite" #Main Sharepoint Site
$subSite = "/sites/subSite" #Sharepoint Wiki sub-site
$pageFolder = "Pages" #Directory of Wiki Pages
$fileFolder = "Media" #Directory of Files
## Setup Basic sites and pages
$site = New-Object Microsoft.SharePoint.SPSite($baseSite+$subSite) #Sharepoint Site
$psite = New-Object Microsoft.SharePoint.Publishing.PublishingSite($baseSite+$subSite) #Publishing Site
$ctype = $psite.ContentTypes["Enterprise Wiki Page"] #Get Enterprise Wiki page content type
$layouts = $psite.GetPageLayouts($ctype, $true) #Get Enterprise Wiki page layout
$layout = $layouts[0] #Choose first layout
$web = $site.OpenWeb(); #Site.Rootweb
$pweb = [Microsoft.SharePoint.Publishing.PublishingWeb]::GetPublishingWeb($web) #Get the Publishing Web Page
$pages = $pweb.GetPublishingPages($pweb) #Get collection of Pages from webpage
## Get files in exported folder and parse them
$files=get-childitem ./testEx -rec|where-object {!($_.psiscontainer)} #Get all files recursively
foreach ($file in $files) { #Foreach file in folder(s)
$name=$file.Name.Substring(0, $file.Name.IndexOf(".htm")) #Get file name, stripped of extension
#Prep Destination url for new pages
$strDestURL=$subSite+$pageFolder+"/"+$name+".aspx"
#End Prep
$page = $pages.Add($strDestURL, $layout) #Add a new page to the collection with Wiki layout
$item = $page.ListItem #Get list item of the page to access fields
$item["Title"] = $name; #Set Title to file name
$cont = Get-Content ./testEx/$file #Read contents of the file
[string]$cont1 = "" #String to hold contents after parsing
### HTML PARSING
foreach($line in $cont){ #Get each line in the contents
$mod = $line #Copy the current line into the working variable
#############################################
# Replacing Doku URI with Sharepoint URI #
#############################################
## Matching for hyperlinks and img src
$mat = $mod -match ".*href=`"/Media.*`"" #Find href since href and img src have same URI Structure
if($mat){ #If a match is found
foreach($i in $matches){ #Cycle through all matches
[string]$j = $i[0] #Set match to a string
$j = $j.Substring($j.IndexOf($fileFolder)) #Shorten string for easier parsing
$j = $j.Substring(0, $j.IndexOf("amp;")+4) #Remove end for easier parsing
$mod = $mod.Replace($j, $j.Substring(0, $j.IndexOf("id="))+$j.Substring($j.IndexOf("&")+5)) #Replace all occurances of original with two sections
}
}
## Matching for files and images
$mat = $mod -match "`"/Media.*`"" #Find all Media resources
if($mat){ #If match is found
[string]$j = $matches[0] #Set match to a string
$j = $j.Substring(0, $j.IndexOf("class")) #Shorten string for easier parsing
$j = $j.Substring($j.IndexOf($fileFolder)) #Shorten
$j = $j.Substring(0, $j.LastIndexOf(":")+1) #Remove end for easier parsing
$mod = $mod.Replace($j, $j.Substring(0, $j.IndexOf($fileFolder)+5)+$j.Substring($j.LastIndexOf(":")+1)) #Replace all occurrences of the original with the two joined sections
}
$mod = $mod.Replace("/"+$fileFolder, $subSite+"/"+$fileFolder+"/") #make sure URI contains base site
## Matching for links
$mat = $mod -match "`"/Page.*`"" #Find all Page files
if($mat){ #If match is found
[string]$j = $matches[0] #Set match to a string
if($j -match "start"){ #If it is a start page,
$k = $j.Replace(":", "-") #replace : with a - instead of removing to keep track of all start pages
$k = $k.Replace("/"+$pageFolder+"/", "/"+$pageFolder) #Remove following / from Pages
$mod = $mod.Replace($j, $k) #Replace old string with the remade one
}
else{ #If it is not a start page
$j = $j.Substring(0, $j.IndexOf("class")) #Stop the string at the end of the href so not to snag any other :
$j = $j.Substring($j.IndexOf($pageFolder)) #Start at Pages in URI
$j = $j.Substring(0, $j.LastIndexOf(":")+1) #Now limit down to last :
$mod = $mod.Replace($j, $j.Substring(0, $j.IndexOf($pageFolder)+5)+$j.Substring($j.LastIndexOf(":")+1)) #Replace all occurrences of the original with the two joined sections
}
[string]$j = $mod #Set the match to a new string
$j = $j.Substring(0, $j.IndexOf("class")) #Stop at class to limit extra "
$j = $j.Substring($j.IndexOf($pageFolder)) #Start at Pages in URI
$j = $j.Substring(0, $j.LastIndexOf("`"")) #Grab ending "
$rep = $j+".aspx" #Add .aspx to the end of URI
$mod = $mod.Replace($j, $rep) #Replaced old URI with new one
}
$mod = $mod.Replace("/"+$pageFolder, $subSite+"/"+$pageFolder+"/") #Add base site to URI
$cont1 += $mod #Add parsed line to the new HTML string
}
##### END HTML PARSING
$item["Page Content"] = $cont1 #Set Wiki page's content to new HTML
$item.Update() #Update the page to set the new fields
}
$site.Dispose() #Dispose of the open link to site
$web.Dispose() #Dispose of the open link to the webpage