LaTeX: Multiple authors in a two-column article - layout

I'm kind of new to LaTeX and I'm having a bit of a problem.
I am using a twocolumn layout for my article. There are four authors involved with different affiliations, and I am trying to list all of them under the title so they span the entire width of the page (all on the same level). It should be similar to this:
Article Title
auth1FN auth1LN    auth2FN auth2LN    auth3FN auth3LN    auth4FN auth4LN
department         department         department         department
school             school             school             school
email@edu          email@edu          email@edu          email@edu
Abstract ........    ................
................     ................
................     ................
Currently I have something along the lines of:
\documentclass[10pt,twocolumn]{article}
\usepackage{multicol}
\begin{document}
\begin{multicols}{2}
\title{Article Title}
\author{
First Last\\
Department\\
school\\
email@edu
\and
First Last\\
...
}
\date{}
\maketitle
\end{multicols}
\begin{abstract}
...
\end{abstract}
\section{Introduction}
...
\end{document}
The problem is that the authors are not displayed all on the same level; instead I get the first three next to each other, with the last one underneath.
Is there a way to achieve what I want? Also, if possible, how can I customize the font of the affiliations (to make them smaller and italic)?

I put together a little test here:
\documentclass[10pt,twocolumn]{article}
\title{Article Title}
\author{
First Author\\
Department\\
school\\
email@edu
\and
Second Author\\
Department\\
school\\
email@edu
\and
Third Author\\
Department\\
school\\
email@edu
\and
Fourth Author\\
Department\\
school\\
email@edu
}
\date{\today}
\begin{document}
\maketitle
\begin{abstract}
\ldots
\end{abstract}
\section{Introduction}
\ldots
\end{document}
Things to note: the title, author, and date fields are declared before \begin{document}. Also, the multicol package is likely unnecessary in this case, since you have declared twocolumn in the document class.
This example puts all four authors on the same line, but if your authors have longer names, departments, or emails, the block might flow over onto another line. You may be able to adjust the font sizes a little to make things fit; this can be done with something like {\small First Author}. Here's a more detailed article on \LaTeX font sizes:
https://engineering.purdue.edu/ECN/Support/KB/Docs/LaTeXChangingTheFont
To italicize you can use {\it First Name} or \textit{First Name}.
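For example, a minimal sketch of an \author block with smaller, italic affiliation lines (the names and addresses are placeholders):
\author{%
First Author\\
{\small\itshape Department}\\
{\small\itshape School}\\
{\small email@edu}
\and
Second Author\\
{\small\itshape Department}\\
{\small\itshape School}\\
{\small email@edu}
}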
Be careful though: if the document is meant for publication, journals or conference proceedings often have their own formatting guidelines, so font-size trickery might not be allowed.

What about using a tabular inside \author{}, just like in IEEE macros:
\documentclass{article}
\begin{document}
\title{Hello, World}
\author{
\begin{tabular}[t]{c@{\extracolsep{8em}}c}
I. M. Author & M. Y. Coauthor \\
My Department & Coauthor Department \\
My Institute & Coauthor Institute \\
email, address & email, address
\end{tabular}
}
\maketitle
\end{document}
This will produce two author columns with any document class.
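For four authors the same idea extends to a four-column tabular; here is a rough sketch (the \extracolsep value is just a starting point to tune, the \small/\itshape styling of the affiliation rows is one way to address the font question above, and all names are placeholders):
\author{%
\begin{tabular}[t]{c@{\extracolsep{2em}}c@{\extracolsep{2em}}c@{\extracolsep{2em}}c}
First Author & Second Author & Third Author & Fourth Author \\
{\small\itshape Department} & {\small\itshape Department} & {\small\itshape Department} & {\small\itshape Department} \\
{\small\itshape School} & {\small\itshape School} & {\small\itshape School} & {\small\itshape School} \\
{\small email@edu} & {\small email@edu} & {\small email@edu} & {\small email@edu}
\end{tabular}
}
If the four columns turn out too wide for the text block, reduce the \extracolsep value or drop it entirely.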

Related

Restart the numbering of the reference labels in the appendix body in Overleaf

I have created a supplementary section in my journal manuscript using the following script:
\appendix
%%%
\renewcommand{\appendixname}{S}
\renewcommand{\thesection}{S}
\renewcommand\thefigure{\thesection.\arabic{figure}}
\setcounter{figure}{0}
\renewcommand*{\thepage}{S\arabic{page}}
\setcounter{page}{1}
%%%
\begin{center}
\section*{Supplementary Material}
\end{center}
%%%
\subsection{Sub-heading1}
A separate bibliography has also been generated for the appendix using the multibib package as follows:
\usepackage[resetlabels]{multibib}
\newcites{supp}{Supplementary References}
and declaring
%% Loading supplementary bibliography style file
\bibliographystylesupp{unsrt}
% Loading supplementary bibliography database
\bibliographysupp{cas-sc-template-refs.bib}
\end{document}
resulting in a reference section that looks like this:
Supplementary References
However, the reference labels in the body text do not change:
S.2. Discussion
S.2.1. Subheading2
The role of the structure of the squares and the circles is clearly seen in the
interdependence of the property on the values of energy and density, as
shown in Figures S.4a and S.4b. There is a clear clustering of data
points based on the primary property as viewed against its dependence
on the secondary property in Figure S.4c. The high-value compositions
are observed to be all apples and the medium-value ones are observed
to be oranges. The values thus predicted placed most of them in the
low- and medium-value range [66].
The reference numbers are still from the main document's bibliography.
I have tried the \DeclareOption{resetlabels}{\continuouslabelsfalse} option described in the multibib package documentation (http://tug.ctan.org/tex-archive/macros/latex/contrib/multibib/multibib.pdf), but to no avail.
Is there any way to renumber these reference labels as well?
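For what it's worth, \newcites{supp}{...} also defines a matching citation command, \citesupp; with multibib, citations made with plain \cite keep pointing at the main bibliography, so only keys cited via \citesupp pick up the supplementary numbering. A minimal sketch of the intended usage (the citation key is a placeholder):
\usepackage[resetlabels]{multibib}
\newcites{supp}{Supplementary References}
...
% in the appendix body, cite into the supplementary bibliography
... placed most of them in the low- and medium-value range \citesupp{somekey66}.
...
\bibliographystylesupp{unsrt}
\bibliographysupp{cas-sc-template-refs.bib}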

Trying to bind a complex file name in a DBI insert place holder, but special characters get mangled

Guys, I don't know if this is a bind problem, a character set problem, or something I haven't considered. Here's the challenge: the file system was written by some CD-ripping software, but the problem is more generic. I need to be able to insert any legal filename from a Linux OS into a database; this set of titles is just a test case.
I tried to make this as clear as I could:
find(\&process, $_);   # populates @fnames with File::Find::name
my $count = scalar @fnames;
print "File count $count\n";
So I use File::Find to fill the array with the file name strings; some are really problematic.
$stm = $dbh_sms->prepare(qq{
INSERT INTO $table (path) VALUES (?)
});
foreach $fname (@fnames) {
print "File:$fname\n";
$stm->execute("$fname");
}
Now here's what I see printed, compared to what comes back out of MariaDB; just a few examples showing the problem:
File:/test1/music/THE DOOBIE BROTHERS - BEST OF THE DOOBIES/02 - THE DOOBIE BROTHERS - LONG TRAIN RUNNIN´.mp3
A bunch of these titles have the back ticks; they are problem #1. Here's how they come back out of a select against the table after I populate it:
| /test1/music/THE DOOBIE BROTHERS - BEST OF THE DOOBIES/02 - THE DOOBIE BROTHERS - LONG TRAIN RUNNIN´.mp3
This character is also a problem:
File:/test1/music/Blue Öyster Cult - Workshop Of The Telescopes/01 - Blue Öyster Cult - Don't Fear The Reaper.mp3
From the database:
/test1/music/Blue Ãyster Cult - Workshop Of The Telescopes/01 - Blue Ãyster Cult - Don't Fear The Reaper.mp3
And one more, but far from the last of the problematic strings:
File:/test1/music/Better Than Ezra - Deluxe/03 - Better Than Ezra - Southern Gürl.mp3
Comes out as:
/test1/music/Better Than Ezra - Deluxe/03 - Better Than Ezra - Southern Gürl.mp3
I thought this was a character set problem, so I added this to the connect string:
{ RaiseError => 1, mysql_enable_utf8mb4 => 1 }
I've also tried:
$dbh_sms->do('SET NAMES utf8mb4');
Is that a hex C2 being added to the string? (Edit: it's an octal 303, hex c3) I've also tried changing the column to varbinary and still see the same results. Anyone have a clue why this is happening? I'm at a loss....
TIA
Edit - I dumped the table with od to find out what is actually getting inserted, since I was of the belief that with a placeholder, a bind variable would be written without interpolation, basically a binary transfer. To save my eyes, I just did a handful of records, concentrating on what I thought was a 'back tick' from the above example, which is an octal 264, AKA "acute accent - spacing acute".
It is in the table, but preceded by 303 202 and 302, which are a couple of those 'A' characters with the icing on top and a "low quote" character in between, which contradicts my prior understanding of the utility of placeholders and bind variables.
So I am more confused now than before.
I found the problem; it was in the Perl character encoding:
$fname = decode("UTF-8", $fname);
was all I needed.
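For context, here is a minimal sketch of where that fits in the insert loop (decode comes from the core Encode module; everything else is as in the snippets above):
use Encode qw(decode);

foreach my $fname (@fnames) {
    # Filenames from File::Find are raw bytes; decode them into Perl character
    # strings so the UTF-8-enabled driver doesn't encode them a second time.
    my $name = decode("UTF-8", $fname);
    print "File:$name\n";
    $stm->execute($name);
}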

How would I write a script to organize a list into a specific table-format?

I have a list of approximately 4,000-odd ancient Chinese proverbs I would like to import into Pleco (a Chinese dictionary app) for flashcards. However, Pleco needs them in a specific format (a table separated by tabs) and to do so manually would take forever.
Any idea how I would implement a script to automatically format the list?
e.g.
Ài fàn yǒu fàn; xī yī yǒu yī.
爱饭有饭, 惜衣有衣。
愛飯有飯, 惜衣有衣。
[Those who] treasure [their] food [will always] have food [and those who] take care of [their] clothing [will always] have clothes [to wear].
[An admonition to thrift; see also bùyī nuǎn below.]
CLOTHING FOOD THRIFT
[A2]
Ái gǒu yǎo de rén bù dōu shì zéi.
挨狗咬的人不都是贼。
挨狗咬的人不都是賊。
(lit) Not all who are bitten by dogs are thieves.
(fig) One should not make judgments based on superficial appearances. Things are not always as they (first) appear.
APPEARANCES JUDGMENTS
[A3]
Áiguo shé yǎo, jiàn shàn pǎo.
挨过蛇咬, 见鳝跑。
挨過蛇咬, 見鱔跑。
(lit) [One who has been] bitten by a snake [at the] sight [of an] eel [will] run away.
(fig) “Once bitten, twice shy.”
[See also yīzhāo bèi shé yǎo below.]
EXPERIENCE LEARNING
into:
爱饭有饭, 惜衣有衣 愛飯有飯, 惜衣有衣 [Those who] treasure [their] food [will always] have food [and those who] take care of [their] clothing [will always] have clothes [to wear].
[An admonition to thrift; see also bùyī nuǎn below.]
挨狗咬的人不都是贼。 挨狗咬的人不都是賊。 (lit) Not all who are bitten by dogs are thieves.
(fig) One should not make judgments based on superficial appearances. Things are not always as they (first) appear.
挨过蛇咬, 见鳝跑。 挨過蛇咬, 見鱔跑。 (lit) [One who has been] bitten by a snake [at the] sight [of an] eel [will] run away.
(fig) “Once bitten, twice shy.”
[See also yīzhāo bèi shé yǎo below.]
It needs to be in the form: Simplified characters TAB Traditional characters TAB TAB Definition
Please leave out the Pinyin (e.g. Ài fàn yǒu fàn; xī yī yǒu yī.), the identifier (e.g. [A1], [A2]) and the last line with the topics (e.g. CLOTHING FOOD THRIFT)
Further down the list the lines will not always follow this exact pattern; sometimes there will be more or fewer lines per proverb.
Thank you all so much! StackOverflow has been a huge help to me in my coding adventures.
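One rough way to do this is a small Python script. The sketch below assumes each entry is separated by an [A1]-style identifier line, the simplified and traditional forms are the lines containing CJK characters, the all-caps topic line is dropped, the first (Pinyin) line is skipped, and everything else is joined into the definition; the input file name is a placeholder:
import re
import sys

def has_cjk(line):
    """True if the line contains CJK ideographs (the simplified/traditional rows)."""
    return any('\u4e00' <= ch <= '\u9fff' for ch in line)

def is_topic_line(line):
    """True for the all-caps topic line, e.g. 'CLOTHING FOOD THRIFT'."""
    s = line.strip()
    return bool(s) and s.isascii() and s == s.upper() and not has_cjk(s)

def parse_entries(lines):
    """Split the file into entries, using [A1]-style identifier lines as separators."""
    entries, current = [], []
    for raw in lines:
        line = raw.rstrip('\n')
        if re.fullmatch(r'\[[A-Z]\d+\]', line.strip()):
            if current:
                entries.append(current)
            current = []
        elif line.strip():
            current.append(line)
    if current:
        entries.append(current)
    return entries

def to_row(entry):
    """Build 'simplified TAB traditional TAB TAB definition' for one entry."""
    cjk = [l for l in entry if has_cjk(l)]
    if len(cjk) < 2:
        return None  # malformed entry, skip it
    simplified, traditional = cjk[0].strip(), cjk[1].strip()
    definition = ' '.join(
        l.strip() for l in entry[1:]          # entry[0] is the Pinyin line
        if not has_cjk(l) and not is_topic_line(l)
    )
    return f'{simplified}\t{traditional}\t\t{definition}'

with open(sys.argv[1], encoding='utf-8') as f:
    for entry in parse_entries(f):
        row = to_row(entry)
        if row:
            print(row)
Since the entries are not always exactly the same shape, treat the heuristics here (CJK detection, the all-caps topic test) as starting points to adjust rather than a finished converter.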

Where is the algorithm or table that "resolves" the OLC short code?

Many geocodes, such as Geohash and OLC (Open Location Code), can be reduced by a context reference, as described here and here.
For example:
Being able to say WF8Q+WF, Cape Verde, Praia is significantly easier than remembering and using 796RWF8Q+WF
The resolver software takes "Cape Verde, Praia" (or the ISO abbreviation CV instead of Cape Verde) and transforms it into a code prefix... The resolver makes use of something like a lookup table:
Prefix | Country | Name (replaces prefix) | Reference is it?
-------+---------+------------------------+------------------
796R | CV | Praia | 796RWFMP ?
796R | CV | Joao Varela | 796RXC4C ?
797R | CV | Cruz do Gato | 797R3F38 ?
... | ... | ... | ...
I am supposing that the hidden (black-box) algorithm does something simple, based on an official lookup table like the one illustrated above. It uses the prefixes of the lookup table to translate a short code into a complete code, or the inverse:
Translating short code to complete code. To recover the location from the OLC short code, you just need to know the prefix. Example: "WF8Q+WF, CV, Praia" will use the CV | Praia line of the lookup table, which gives the prefix 796R to resolve the code, concatenating prefix with suffix: "796R" with "WF8Q+WF". It is like a function recoverNearest('WF8Q+WF', getReferencePoint_byText(lookup, "CV", "Praia")), but Google/PlusCodes has not published the lookup dataset for Cape Verde.
Translating complete code to short code. To show the short code for a location (e.g. from 796RWF8Q+WF), it is necessary to check the "nearest reference" to resolve the spatial query: the Joao Varela and Praia lines have the same prefix, but Praia's reference, 796RWF, matches better. It is like a function shorten('796RWF8Q+WF', getReferencePoint_byNearFrom(lookup, '796RWF8Q+WF')), but Google/PlusCodes has not published the lookup dataset for Cape Verde.
Question: where is the official lookup table for Cape Verde?
NOTES
We can split this into more questions to generalize:
Is plus.codes really a black box? (Perhaps I am using a wrong hypothesis in my explanation.)
Does the lookup table of a country like Cape Verde exist, and can we download it? Where is Google publishing it?
If the official lookup table of Cape Verde exists and Google is respecting it, where is the Cape Verde government publishing it?
More illustrations for readers who do not understand the central problem:
Translation from complete code to short code. Given the prefix 796R, when is a complete code 796Rxxxx+yy translated to "Praia xxxx+yy" and when to "Joao Varela xxxx+yy"? It is an arbitrary choice if you do not have a table with the official PlusCodes references.
Translation from short code to complete code. Suppose that I am developing Javascript software. The inputs are the short code xxxx+yy and a name (country and city, or country/state/city/district). Considering only Cabo Verde place names, how do I convert names into prefixes exactly as PlusCodes does?
(edit after discussions) A preliminary conclusion. There are only two possible answers to the question:
show a link where PlusCodes published its name-to-prefix table;
show the source code of an algorithm that exactly reproduces PlusCodes, developed by reverse engineering. I suppose the simplest algorithm uses the ordinary OLC encode/decode plus a parser that translates names into prefixes (or vice versa), based on an "official lookup table".
Open Location Code is just another form of standard geographic coordinates: latitude and longitude. So, to get the OLC for any place you only need the geographic coordinates of that place (see the encoding section), and vice versa.
With a database of Cape Verde towns and their coordinates you can build your own lookup table for quick OLC transformation with any required precision (starting from the Wikipedia List of cities and towns in Cape Verde or using any of the free world cities databases), or you can just convert the OLC to latitude and longitude and then work with those coordinates.
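A small sketch of that approach, assuming the openlocationcode Python package from the open-location-code project; the reference coordinates below are approximate values taken from a gazetteer, not an official PlusCodes dataset:
from openlocationcode import openlocationcode as olc

# Homemade "lookup table": (country, place) -> approximate centre coordinates.
# These are illustrative gazetteer values, NOT an official PlusCodes reference list.
PLACES = {
    ("CV", "Praia"): (14.918, -23.509),
    ("CV", "Joao Varela"): (14.956, -23.579),
}

def to_full_code(short_code, country, city):
    """Short code + place name -> full code, via our own lookup table."""
    lat, lng = PLACES[(country, city)]
    return olc.recoverNearest(short_code, lat, lng)

def to_short_code(full_code, country, city):
    """Full code -> short code relative to the chosen reference place."""
    lat, lng = PLACES[(country, city)]
    return olc.shorten(full_code, lat, lng)

print(to_full_code("WF8Q+WF", "CV", "Praia"))      # should recover 796RWF8Q+WF
print(to_short_code("796RWF8Q+WF", "CV", "Praia")) # drops as many leading characters
                                                   # as the reference distance allows
Whether this reproduces plus.codes exactly still depends on using the same reference points, which is precisely the lookup table the question is asking about.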

Org-mode export to text: citations exported with HTML tags

I am trying to generate text files from org files (originally intended for LaTeX > PDF output), for submissions that require plain text.
org-file:
#+TITLE: Foo
#+latex_class: article-no-defaults
#+OPTIONS: |:nil toc:nil author:nil
#+latex_class_options: [11pt,a4paper]
#+latex_header: \usepackage{fontspec}
#+latex_header: \setmainfont{Times New Roman}
#+latex_header: \usepackage{float}
#+latex_header: \usepackage{latexsym}
#+latex_header: \usepackage{graphicx}
#+latex_header: \usepackage{url}
#+latex_header: \usepackage{cleveref}
#+EXPORT_EXCLUDE_TAGS: noexport
#+DRAWERS: NOTES
\date{}
\maketitle
*Document Begins
text text
sentence 1 cite:Ohala1997. sentence 2 cite:Ohala1983, cite:Ham1998. Previous studies on sentence 3 cite:hankamer1988, cite:Ghosh2015, cite:banerjee2018. sentence 4 cite:recasens1997. sentence 5 cite:banerjee2018.
bibliography:file.bib
bibliographystyle:plain
I tried ‘apalike’, ‘plain’, and ‘natbib’ to check whether the problem persisted with different bibliography styles. When I exported this file to text (C-c C-e t U/A) as Unicode and ASCII, this was the output:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Foo
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
\date{} \maketitle
sentence 1 [Ohala1997]. sentence 2 [Ohala1983], [Ham1998]. Previous studies on sentence 3 [hankamer1988], [Ghosh2015], [banerjee2018]. sentence 4 [recasens1997]. sentence 5 [banerjee2018].
Bibliography ============= [Ohala1997] John Ohala, Aerodynamics of
phonology, <i>{Proc. 4th Seoul International Conference on Linguistics
[SICOL]}</i>, <b>()</b>, 92--97 (1997). link. doi. [Ohala1983] John Ohala, PHONETIC
EXPLANATIONS FOR SOUND PATTERNS: IMPLICATIONS FOR GRAMMARS OF
COMPETENCE., <i>{Historical linguistics: Problems and perspectives}</i>,
<b>()</b>, 237--278 (1993). link. doi.
Here, the in-line references are rendered as intended, but the bibliography has HTML tags, and “\date{} \maketitle” are also passed through as-is (this is a problem I’ve seen in Git repos sometimes). Is there a way to generate a text file (templates, packages) with the bibliography section rendered correctly, and without any org-mode tags?
Thanks in advance.
Using pandoc together with pandoc-citeproc will get you pretty close. Pandoc doesn't understand the syntax used by org-ref, so you'd have to change the last two lines to
#+bibliography: file.bib
# OPTIONAL, uncomment if you'd like a different citation style,
# see https://citationstyles.org/.
# #+csl: <your-preferred-style>
Run pandoc by calling
pandoc YOURFILE.org --to=plain
in your terminal. Pandoc-citeproc is called automatically in the process. This will give you plain ASCII and will include a bibliography and references.
