How to disable downloading of Confluence attachments during business hours

Confluence wiki stores attachments in a convenient way: hundreds of files can be uploaded to a single page, with each page acting like a folder that holds both the page content and its attachments. Attachments can be downloaded by going to Tools > Attachments and selecting the file to download. To make things easier there is also a Download All option, which zips up all of the files and sends the archive to the user as a .zip with a random name.
The problem is that pages with large numbers of attachments get downloaded during working hours, putting heavy load on the server. What is needed is a way to disable the download function during business hours, or at least to prevent large zip files from being created during that time.

Edit confluence/confluence/pages/viewattachments.vm and change the logic for displaying the Download All link so that it only displays outside of business hours.
Code to add:
## Only display the Download All link outside of business hours ##
#set ($tdate = $action.dateFormatter.getCurrentDateTime() ) ## Get today's date and time
#set ($thour = $tdate.replaceAll(".* ([0-9]+):..", "$1") ) ## Extract the current hour from the datetime string
#if (!$thour.matches("8|9|10|11|12|13|14")) ## Business hours: 08:00 to 14:59
#if ($action.latestVersionsOfAttachments.size() > 1)
<a id="download-all-link" href="$req.contextPath/pages/downloadallattachments.action?pageId=$pageId" title="$action.getText('download.all.desc')">$action.getText('download.all')</a>
#end
#else
Download all attachments option only available outside of business hours.
#end ## end of business-hours check
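Note that this assumes the datetime string produced by $action.dateFormatter renders single-digit hours without a leading zero; if the configured date format zero-pads the hour (e.g. "08"), a pattern such as $thour.matches("0?8|0?9|1[0-4]") would be needed instead.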
This is the block of code to change:
#if ($latestVersionsOfAttachments.size() > 0)
#set ($contextPath = "$req.contextPath/pages/viewpageattachments.action?pageId=$pageId&sortBy=$sortBy&")
#set ($sortPathPrefixHtml = "?pageId=$page.id&sortBy=")
#set ($showActions = "true")
#set ($old = "true")
#set ($attachmentHelper = $action)
#parse("/pages/includes/attachments-table.vm")
#if ($action.latestVersionsOfAttachments.size() > 1)
<a id="download-all-link" href="$req.contextPath/pages/downloadallattachments.action?pageId=$pageId" title="$action.getText('download.all.desc')">$action.getText('download.all')</a>
#end
#pagination($action.paginationSupport $contextPath)
#else
$action.getText('currently.no.attachments')
#end
Here is the change so that the Download All link is only displayed outside of business hours:
#if ($latestVersionsOfAttachments.size() > 0)
#set ($contextPath = "$req.contextPath/pages/viewpageattachments.action?pageId=$pageId&sortBy=$sortBy&")
#set ($sortPathPrefixHtml = "?pageId=$page.id&sortBy=")
#set ($showActions = "true")
#set ($old = "true")
#set ($attachmentHelper = $action)
#parse("/pages/includes/attachments-table.vm")
## Only display the Download All link outside of business hours ##
#set ($tdate = $action.dateFormatter.getCurrentDateTime() ) ## Get today's date and time
#set ($thour = $tdate.replaceAll(".* ([0-9]+):..", "$1") ) ## Extract the current hour from the datetime string
#if (!$thour.matches("8|9|10|11|12|13|14")) ## Business hours: 08:00 to 14:59
#if ($action.latestVersionsOfAttachments.size() > 1)
<a id="download-all-link" href="$req.contextPath/pages/downloadallattachments.action?pageId=$pageId" title="$action.getText('download.all.desc')">$action.getText('download.all')</a>
#end
#else
Download all attachments option only available outside of business hours.
#end ## end of business-hours check
#pagination($action.paginationSupport $contextPath)
#else
$action.getText('currently.no.attachments')
#end

Related

Any way to use external list of color hex codes to change layer fill in Photoshop?

I have a list of 100 different color hex codes.
I want to create 100 different PNG files, that each use a different color from this list.
Apparently I cannot use variables in Photoshop, so I am looking for another way, since I am not a scripting guru.
If scripting is the only way, is there a simpler language I could leverage, like VB or PowerShell, rather than trying to learn JavaScript?
Thank you
This is the general premise; for each image, change hexcol:
// Photoshop ExtendScript: fill the active selection with the given colour
var hexcol = "ff00ff"; // change this to alter the colour
var col = new SolidColor();
col.rgb.hexValue = hexcol;
app.activeDocument.selection.fill(col, ColorBlendMode.NORMAL, 100, false);
Maybe learning JavaScript would ultimately allow a simplified approach, but this is what I did.
First I created a CSV file of the color hex codes I wanted:
filename,red,green,blue
BackGround1,255,255,255
Then I created a PowerShell script to generate a PNG file for each color:
# Specify the path of the Excel or csv data file
$FilePath = "C:\...\BackgroundColors.csv"
# Specify the starting and ending data rows
$RowStart = 2
$RowEnd = 101
# Specify the Sheet name
$SheetName = "BackgroundColors"
# Create an Object Excel.Application using Com interface
$ObjExcel = New-Object -ComObject Excel.Application
# Disable the 'visible' property so the document won't open in Excel
$ObjExcel.Visible = $false
# Open the Excel file and save it in $WorkBook
$WorkBook = $ObjExcel.Workbooks.Open($FilePath)
# Load the WorkSheet
$WorkSheet = $WorkBook.sheets.item($SheetName)
# Set Image file SaveTo location
$SaveTo ="C:\...\Backgrounds"
# Load the System.Drawing assembly for image creation
Add-Type -AssemblyName System.Drawing
#Loop
for ($counter = $RowStart; $counter -le $RowEnd; $counter++ )
{
    # Read the file name and RGB components for this row
    $SaveAs = $WorkSheet.Range("A$counter").text
    $Red = $WorkSheet.Range("B$counter").text
    $Green = $WorkSheet.Range("C$counter").text
    $Blue = $WorkSheet.Range("D$counter").text
    $FileName = "$SaveTo\$SaveAs.png"
    # Create bitmap
    $bmp = new-object System.Drawing.Bitmap 750,750
    # Create brush with color
    $brush = [System.Drawing.SolidBrush]::New([System.Drawing.Color]::FromArgb(255, $Red, $Green, $Blue))
    $graphics = [System.Drawing.Graphics]::FromImage($bmp)
    $graphics.FillRectangle($brush,0,0,$bmp.Width,$bmp.Height)
    # Release GDI+ resources before the next iteration
    $graphics.Dispose()
    $brush.Dispose()
    $bmp.Save($FileName)
    $bmp.Dispose()
}
$workbook.Close($false)
$objExcel.Quit()
# release Com objects
[System.Runtime.Interopservices.Marshal]::ReleaseComObject($Worksheet) | Out-Null
[System.Runtime.Interopservices.Marshal]::ReleaseComObject($Workbook) | Out-Null
[System.Runtime.Interopservices.Marshal]::ReleaseComObject($objExcel) | Out-Null
# Force garbage collection
[System.GC]::Collect()
I understand there are probably better ways than going through the Excel COM interface, but I wanted to learn how to leverage Excel with PowerShell.
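For comparison, here is a minimal sketch of the same loop using the built-in Import-Csv cmdlet instead of the Excel COM interface, assuming the CSV layout shown above (the paths are placeholders, as before):
Add-Type -AssemblyName System.Drawing
$SaveTo = "C:\...\Backgrounds"
foreach ($row in Import-Csv "C:\...\BackgroundColors.csv") {
    # Fill a bitmap with this row's RGB colour and save it under the row's file name
    $bmp = New-Object System.Drawing.Bitmap 750,750
    $brush = [System.Drawing.SolidBrush]::New([System.Drawing.Color]::FromArgb(255, [int]$row.red, [int]$row.green, [int]$row.blue))
    $graphics = [System.Drawing.Graphics]::FromImage($bmp)
    $graphics.FillRectangle($brush, 0, 0, $bmp.Width, $bmp.Height)
    $graphics.Dispose()
    $brush.Dispose()
    $bmp.Save("$SaveTo\$($row.filename).png")
    $bmp.Dispose()
}
Because no Excel process is started, none of the COM cleanup at the end is needed.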
Then in Photoshop I created a template, with a layer that I could replace with each of the files I created, along with other files to compose the complete image I wanted:
File > New
Image > Variables > Define > Pixel Replacement
I created another CSV file with the paths to the image files I created:
newfilename,background path,image path
NewImageName,C:\...\BackGround1.png,C:\...\Image1.png
Then in Photoshop I imported the CSV file as a data set:
Image > Variables > Data Sets > Import
Then exported the data set as files, which created PSD files:
File > Export > Data Sets as Files
Then I created a Photoshop Action to save a PSD file as a PNG file:
Window > Actions > Create New Action > Record
Then I ran the Action as a Batch against all the PSD files:
File > Automate > Batch > MyActions
There are certainly more details involved than presented here, but this approach enabled me to automate the creation of the backgrounds and then combine them with other images to create a variety of new images quickly. Getting the whole process worked out was not quick, but it is now easily repeatable, and over time it can be improved with additional knowledge and capabilities.

Header menu included in exported Excel sheet on WordPress front-end pages

// Send the download headers before any other output
$now = gmdate('D, d M Y H:i:s') . ' GMT';
$filename = "analysis_report.xls";
header('Content-type: application/ms-excel');
header('Expires: ' . $now);
header('Content-Disposition: attachment; filename=' . $filename);
header('Pragma: no-cache');
// Render the report body, then stop so no theme output (e.g. the header menu) is appended
require(SB_PATH . "views/export_analysis/analysis_report_export.php");
exit;
I have to export dynamic HTML table data on the WordPress front end. The data comes out fine, but the header menu also gets included in the Excel sheet.
The same code works fine in the WordPress backend.
Here is a screenshot:
https://prnt.sc/qyw9v6
Please suggest how to correct this.
Your question is a little vague... but the reason the page markup ends up in every exported file is that you are running the export after output has already been sent. This means you need to hook your function into init or admin_init, as the case may be.
Also, you want to fire the CSV export function only when the button is clicked. What I do is something like this:
// CSV Export
if (isset($_REQUEST['do']) && $_REQUEST['do'] == 'download' && isset($_GET['page']) && $_GET['page'] == 'your-page-slug'){
    add_action('admin_init', 'download_csv_file');
}
I fire the CSV export with a button on the admin page like this:
<a class="button-primary" href="<?php echo admin_url('admin.php?page=your-page-slug&do=download');?>&_wpnonce=<?php echo wp_create_nonce( 'download_csv' )?>">Download Existing Data</a>
I believe with the information above, you should be able to solve your problem.

Converting the Excel properties to a Number Format

I have an automation script that takes a .htm file and generates a custom object to be exported to a spreadsheet.
Unfortunately, one of the items in my object is a very long number, so when it does get exported to a .xlsx, it looks like this:
1.04511E+11
I understand that changing the format to a number with no decimals is possible within Excel itself, but I wanted to know if there is a way I can change the format within my script, especially since this is intended to be an automated process.
Here is the segment of my script:
## Build custom object which allows for modification of the spreadsheet that will be generated
$xls = ForEach ($item in $spreadsheet | Where-Object {$PSItem.P4 -match '\w+'}) {
    [pscustomobject]@{
        ## Define the columns for the CSV file
        'MnsTransTy' = $item.P1.TrimStart()
        'Del. Date' = $item.P2
        'Time' = $item.P3
        'Name 1' = $item.P4
        'Purch.Doc.' = $item.P5
        ## Remove white space
        'Cases' = $item.P6.TrimStart()
        ## Remove white space
        'line' = $item.P7.TrimStart()
        'Name' = $item.P8
        ## Remove white space
        'Tot Pallet' = $Item.P9.TrimStart()
    }
}
The item in question is P5. I am using the ImportExcel Module which is found here: https://github.com/dfinke/ImportExcel.
Any help on this would be greatly appreciated! Thanks in advance!
This probably happens because you are getting the values from the cells as strings.
You could try specifying the data type explicitly (Double in your case), like so:
[pscustomobject]@{
    ## Define the columns for the CSV file
    'MnsTransTy' = $item.P1.TrimStart()
    'Del. Date' = $item.P2
    'Time' = $item.P3
    'Name 1' = $item.P4
    ## Cast to Double so Excel stores a number instead of text
    'Purch.Doc.' = [Double]$item.P5
    ## Remove white space
    'Cases' = $item.P6.TrimStart()
    ## Remove white space
    'line' = $item.P7.TrimStart()
    'Name' = $item.P8
    ## Remove white space
    'Tot Pallet' = $Item.P9.TrimStart()
}
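If you would rather control the display format at export time instead, the ImportExcel module can apply a number format when writing the file. A minimal sketch, assuming the -NumberFormat parameter available in recent versions of the module (the output path is a placeholder):
# Use a plain integer display format so long numbers are not shown in scientific notation
$xls | Export-Excel -Path .\analysis_report.xlsx -NumberFormat '0'
Note that -NumberFormat sets the default format for the whole sheet; for per-column control the module also offers commands such as Set-ExcelColumn.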

How to save manipulated WAV files in object list?

I have the following problem: I want to low-pass filter 240 WAV files. The script runs only until the low-pass filtered sounds are created and shown in the object list ("..._band"); Praat does not export them as WAV files. After choosing the output folder, I get the error message "Command 'Get number of strings' not available for current selection".
In short, my question is: how can I save the sounds in the object list individually, with their new file names? See also the screenshot.
The script is below.
Thank you very much for your help!
Greetings,
#determine praat version
ver1$ = left$(praatVersion$, (rindex(praatVersion$, ".")-1));
ver1 = 'ver1$'
if ver1 < 5.2
exit Please download a more recent version of Praat
endif
if ver1 == 5.2
ver2$ = right$(praatVersion$, length(praatVersion$) - (rindex(praatVersion$, ".")));
ver2 = 'ver2$'
if ver2 < 4
exit Please download a more recent version of Praat (minor)
endif
endif
beginPause ("Low-Pass Filter Instructions")
comment ("1. Select a folder containing the wave files to be low-pass filtered")
comment ("2. Wave files will be low-pass filtered (0 - 400 Hz)")
comment ("3. Select an output folder for the low-pass filtered wave files to be saved to")
comment ("Click 'Next' to begin")
clicked = endPause("Next", 1);
#wavefile folder path
sourceDir$ = chooseDirectory$ ("Select folder containing wave files")
if sourceDir$ == ""
exit Script exited. You did not select a folder.
else
sourceDir$ = sourceDir$ + "/";
endif
Create Strings as file list... list 'sourceDir$'/*.wav
numberOfFiles = Get number of strings
levels$ = ""
for ifile to numberOfFiles
select Strings list
currentList = selected ("Strings")
filename$ = Get string... ifile
Read from file... 'sourceDir$'/'filename$'
currentSound = selected ("Sound")
filterFreqRange = Filter (pass Hann band)... 0 400 20
select currentSound
Remove
endfor
select currentList
Remove
#output folder path - where the wave files get saved
outputDir$ = chooseDirectory$ ("Select folder to save wave files")
if outputDir$ == ""
exit Script exited. You did not select a folder.
else
outputDir$ = outputDir$ + "/";
endif
numberOfFiles = Get number of strings
for ifile to numberOfFiles
select Strings list
currentList = selected ("Strings")
filename$ = Get string... ifile
currentSound = selected ("Sound")
endif
Save as WAV file... 'outputDir$'/'filename$'
select currentSound
Remove
endfor
#clean up
select currentList
Remove
#clear the info window
clearinfo
#print success message
printline Successfully low-pass filtered 'numberOfFiles' wave files.
At a glance, the command is not available because when you get to that point in the script the selection is empty. It is empty because at the end of the first loop, you select your Strings object and then Remove it.
More generally, your script is not handling the object selection properly, which is a big problem in Praat because available commands change depending on the active selection.
So even if you remove the line where you Remove the list, you will bump into a problem the second time you run Get number of strings because by then you have changed the selection. And even if you remove that line (which you don't really need), you'll still bump into a problem when you run selected("Sound") after selecting the Strings object, because by then you won't have any selected Sound objects (you didn't have them before anyway).
A more idiomatic version of your script, which also runs on a single loop where sounds are read, filtered, and removed one by one (which is also more memory efficient) would look like this:
form Low-Pass Filter...
    real From_frequency_(Hz) 0
    real To_frequency_(Hz) 400
    real Smoothing_(Hz) 20
    comment Leave paths empty for GUI selectors
    sentence Source_dir
    sentence Output_dir
endform

if praatVersion < 5204
    exitScript: "Please download a more recent version of Praat"
endif

if source_dir$ == ""
    source_dir$ = chooseDirectory$("Select folder containing wave files")
    if source_dir$ == ""
        exit
    endif
endif

if output_dir$ == ""
    output_dir$ = chooseDirectory$("Select folder to save filtered files")
    if output_dir$ == ""
        exit
    endif
endif

list = Create Strings as file list: "list", source_dir$ + "/*.wav"
numberOfFiles = Get number of strings

for i to numberOfFiles
    selectObject: list
    filename$ = Get string: i
    sound = Read from file: source_dir$ + "/" + filename$
    filtered = Filter (pass Hann band): from_frequency, to_frequency, smoothing
    # The filtered Sound keeps the original name with "_band" appended;
    # add the extension explicitly when saving
    Save as WAV file: output_dir$ + "/" + selected$("Sound") + ".wav"
    removeObject: sound, filtered
endfor
Although you might be interested to know that, if you do not need the automatic file reading and saving (i.e., if you are OK with having your script work on objects already in the list), your script could be reduced to
Filter (pass Hann band): 0, 400, 20
which works on multiple selected Sound objects.
If you absolutely need the looping (i.e., if you'll be working on really large sets of files) you might also be interested in using the vieweach plugin, which was written to try and minimise this sort of boilerplate (full disclaimer: I wrote the plugin). I wrote a longer blog post about it as well.

How to get SharePoint 2010 wiki page layouts to show with PowerShell?

I am trying to use PowerShell to run through a list of wiki pages that were originally on a separate wiki site. I want to migrate these to a SharePoint 2010 wiki site. I have a PowerShell script that goes through all the files, creates a wiki page for each, sets the Page Content to the body HTML of the old page, and updates the item, but for some reason none of the layout is shown. The page is just blank, with no boxes shown in Edit mode or in SharePoint Designer. Here is my script:
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
$SPSite = New-Object Microsoft.SharePoint.SPSite("http://dev-sharepoint1/sites/brandonwiki");
$OpenWeb = $SpSite.OpenWeb("/sites/brandonwiki");
$List = $OpenWeb.Lists["Pages"];
$rootFolder = $List.RootFolder
$OpenWeb.Dispose();
$SPSite.Dispose()
$files=get-childitem ./testEx -rec|where-object {!($_.psiscontainer)}
foreach ($file in $files) {
$name=$file.Name.Substring(0, $file.Name.IndexOf(".htm"))
$strDestURL="/sites/brandonwiki"
$strDestURL+=$rootFolder
$strDestURL+="/"
$strDestURL+=$name;
$strDestURL+=".aspx"
$Page = $rootFolder.Files.Add($strDestUrl, [Microsoft.SharePoint.SPTemplateFileType]::WikiPage)
$Item=$Page.Item
$Item["Name"] = $name
$cont = Get-Content ./testEx/$file
$cont = $cont.ToString()
$Item["Page Content"] = $cont
$Item.UpdateOverwriteVersion();
}
Here is what a good page looks like in edit mode that was added manually:
http://imageshack.us/photo/my-images/695/goodpage.png/
And what the pages look like done through Powershell:
http://imageshack.us/photo/my-images/16/badpage.png/
I found the answer. The script needs to create a publishing page instead of a list item; the publishing API then applies the appropriate wiki page layout and sets everything up for you. Here is my new script, for anyone else who runs into this issue (note that it also replaces the DokuWiki links with SharePoint ones):
##################################################################################################################
# Wiki Page Creator #
##################################################################################################################
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") #Setup Sharepoint Functions
#Commonly Used Variables To Set
$baseSite = "http://sharepointSite" #Main Sharepoint Site
$subSite = "/sites/subSite" #Sharepoint Wiki sub-site
$pageFolder = "Pages" #Directory of Wiki Pages
$fileFolder = "Media" #Directory of Files
## Setup Basic sites and pages
$site = New-Object Microsoft.SharePoint.SPSite($baseSite+$subSite) #Sharepoint Site
$psite = New-Object Microsoft.SharePoint.Publishing.PublishingSite($baseSite+$subSite) #Publishing Site
$ctype = $psite.ContentTypes["Enterprise Wiki Page"] #Get Enterprise Wiki page content type
$layouts = $psite.GetPageLayouts($ctype, $true) #Get Enterprise Wiki page layout
$layout = $layouts[0] #Choose first layout
$web = $site.OpenWeb(); #Site.Rootweb
$pweb = [Microsoft.SharePoint.Publishing.PublishingWeb]::GetPublishingWeb($web) #Get the Publishing Web Page
$pages = $pweb.GetPublishingPages($pweb) #Get collection of Pages from webpage
## Get files in exported folder and parse them
$files=get-childitem ./testEx -rec|where-object {!($_.psiscontainer)} #Get all files recursively
foreach ($file in $files) { #Foreach file in folder(s)
$name=$file.Name.Substring(0, $file.Name.IndexOf(".htm")) #Get file name, stipped of extension
#Prep Destination url for new pages
$strDestURL=$subSite+$pageFolder+"/"+$name+".aspx"
#End Prep
$page = $pages.Add($strDestURL, $layout) #Add a new page to the collection with Wiki layout
$item = $page.ListItem #Get list item of the page to access fields
$item["Title"] = $name; #Set Title to file name
$cont = Get-Content ./testEx/$file #Read contents of the file
[string]$cont1 = "" #String to hold contents after parsing
### HTML PARSING
foreach($line in $cont){ #Get each line in the contents
$mod = $line #Work on a copy of the current line; all replacements below operate on $mod
#############################################
# Replacing Doku URI with Sharepoint URI #
#############################################
## Matching for hyperlinks and img src
$mat = $mod -match ".*href=`"/Media.*`"" #Find href since href and img src have same URI Structure
if($mat){ #If a match is found
foreach($i in $matches){ #Cycle through all matches
[string]$j = $i[0] #Set match to a string
$j = $j.Substring($j.IndexOf($fileFolder)) #Shorten string for easier parsing
$j = $j.Substring(0, $j.IndexOf("amp;")+4) #Remove end for easier parsing
$mod = $mod.Replace($j, $j.Substring(0, $j.IndexOf("id="))+$j.Substring($j.IndexOf("&")+5)) #Replace all occurances of original with two sections
}
}
## Matching for files and images
$mat = $mod -match "`"/Media.*`"" #Find all Media resources
if($mat){ #If match is found
[string]$j = $matches[0] #Set match to a string
$j = $j.Substring(0, $j.IndexOf("class")) #Shorten string for easier parsing
$j = $j.Substring($j.IndexOf($fileFolder)) #Shorten
$j = $j.Substring(0, $j.LastIndexOf(":")+1) #Remove end for easier parsing
$mod = $mod.Replace($j, $j.Substring(0, $j.IndexOf($fileFolder)+5)+$j.Substring($j.LastIndexOf(":")+1)) #Replace all occurrences of original with two sections
}
$mod = $mod.Replace("/"+$fileFolder, $subSite+"/"+$fileFolder+"/") #make sure URI contains base site
## Matching for links
$mat = $mod -match "`"/Page.*`"" #Find all Page files
if($mat){ #If match is found
[string]$j = $matches[0] #Set match to a string
if($j -match "start"){ #If it is a start page,
$k = $j.Replace(":", "-") #replace : with a - instead of removing to keep track of all start pages
$k = $k.Replace("/"+$pageFolder+"/", "/"+$pageFolder) #Remove following / from Pages
$mod = $mod.Replace($j, $k) #Replace old string with the remade one
}
else{ #If it is not a start page
$j = $j.Substring(0, $j.IndexOf("class")) #Stop the string at the end of the href so not to snag any other :
$j = $j.Substring($j.IndexOf($pageFolder)) #Start at Pages in URI
$j = $j.Substring(0, $j.LastIndexOf(":")+1) #Now limit down to last :
$mod = $mod.Replace($j, $j.Substring(0, $j.IndexOf($pageFolder)+5)+$j.Substring($j.LastIndexOf(":")+1)) #Replace all occurrences of original with two sections
}
[string]$j = $mod #Set the match to a new string
$j = $j.Substring(0, $j.IndexOf("class")) #Stop at class to limit extra "
$j = $j.Substring($j.IndexOf($pageFolder)) #Start at Pages in URI
$j = $j.Substring(0, $j.LastIndexOf("`"")) #Grab ending "
$rep = $j+".aspx" #Add .aspx to the end of URI
$mod = $mod.Replace($j, $rep) #Replaced old URI with new one
}
$mod = $mod.Replace("/"+$pageFolder, $subSite+"/"+$pageFolder+"/") #Add base site to URI
$cont1 += $mod #Add parsed line to the new HTML string
}
##### END HTML PARSING
$item["Page Content"] = $cont1 #Set Wiki page's content to new HTML
$item.Update() #Update the page to set the new fields
}
$site.Dispose() #Dispose of the open link to site
$web.Dispose() #Dispose of the open link to the webpage
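One follow-up note: depending on how the Pages library is configured (required check-out, minor versions, content approval), pages created this way may be left checked out and invisible to other users. A hedged addition at the end of the loop, using the standard SPFile methods, would look like this (skip whichever calls your library's settings don't require):
$item.File.CheckIn("Migrated page") #Check the page in so other users can see it
$item.File.Publish("Migrated page") #Publish a major version if versioning requires it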
