\documentclass[lettersize,journal]{IEEEtran}
\usepackage{amsmath,amsfonts}
\usepackage{algorithmic}
\usepackage{algorithm}
\usepackage{array}
% \usepackage[caption=false,font=normalsize,labelfont=sf,textfont=sf]{subfig}
\usepackage{textcomp}
\usepackage{stfloats}
\usepackage{url}
\usepackage{verbatim}
\usepackage{graphicx}
\usepackage{cite}
\usepackage{subcaption}
\usepackage{graphicx}
% \usepackage{subfigure}
\usepackage[T1]{fontenc}
\hyphenation{op-tical net-works semi-conduc-tor IEEE-Xplore}
% updated with editorial comments 8/9/2021
\begin{document}
\title{PolarRCNN:\@ Fewer Anchors for Lane Detection}
\author{IEEE Publication Technology,~\IEEEmembership{Staff,~IEEE,}
% <-this % stops a space
\thanks{This paper was produced by the IEEE Publication Technology Group. They are in Piscataway, NJ.}% <-this % stops a space
\thanks{Manuscript received April 19, 2021; revised August 16, 2021.}}
% The paper headers
\markboth{Journal of \LaTeX\ Class Files,~Vol.~14, No.~8, August~2021}%
{Shell \MakeLowercase{\textit{et al.}}: A Sample Article Using IEEEtran.cls for IEEE Journals}
% \IEEEpubid{0000--0000/00\$00.00~\copyright~2021 IEEE}
% Remember, if you use this you must call \IEEEpubidadjcol in the second
% column for its text to clear the IEEEpubid mark.
\maketitle
\begin{abstract}
Lane detection is a critical and challenging task in autonomous driving, particularly in real-world scenarios where traffic lanes are often slender, lengthy, and partially obscured by other vehicles. Existing anchor-based methods typically rely on prior line anchors or grid anchors to extract features and to regress lane location and shape. However, manually setting these prior anchors according to the lane distribution is cumbersome, and ensuring sufficient anchor coverage across diverse datasets requires a large number of anchors. In this study, we introduce PolarRCNN, a two-stage anchor-based method for lane detection. Our approach substantially reduces the number of lane anchors without sacrificing performance, yielding competitive results on three prominent 2D lane detection benchmarks (TuSimple, CULane, and LLAMAS) while maintaining a lightweight model size.
\end{abstract}
\begin{IEEEkeywords}
Lane detection
\end{IEEEkeywords}
\section{Introduction}
\IEEEPARstart{L}{ane} detection is a significant problem in computer vision and autonomous driving, forming the basis for accurately perceiving the driving environment in intelligent driving systems. While extensive research has been conducted in ideal environments, it remains a challenging task in adverse scenarios such as night driving, glare, crowded roads, and rainy conditions, where lanes may be occluded or damaged. Moreover, the slender shapes and complex topologies of lanes add to the difficulty of detection. An effective lane detection method should take both high-level semantic features and low-level features into account to address these varied conditions and ensure robust performance at fast speed in real-time applications such as autonomous driving.
Traditional methods predominantly concentrate on handcrafted local feature extraction and lane shape modeling. Techniques such as the Canny edge detector\cite{canny1986computational}, Hough transform\cite{houghtransform}, and deformable templates for lane fitting\cite{kluge1995deformable} have been extensively utilized. Nevertheless, these approaches often encounter limitations in practical settings, particularly when low-level and local features lack clarity or distinctiveness.
In recent years, fueled by advancements in deep learning and the availability of large datasets, significant strides have been made in lane detection. Deep models, including convolutional neural networks (CNNs) and transformer-based architectures, have propelled progress in this domain. Previous approaches often treated lane detection as a segmentation task, which is conceptually simple but computationally intensive. Some methods relied on parameter-based models, directly outputting lane curve parameters instead of pixel locations. These models offer end-to-end solutions, but the sensitivity of the curve parameters to lane shape compromises their robustness.
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{lanefig/anchor_demo/anchor_fix_init.jpg}
\caption{}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{lanefig/anchor_demo/anchor_fix_learned.jpg}
\caption{}
\end{subfigure}
%\qquad
% break line between subfigures
\begin{subfigure}{0.49\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{lanefig/anchor_demo/anchor_proposal.jpg}
\caption{}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{lanefig/anchor_demo/gt.jpg}
\caption{}
\end{subfigure}
\caption{Comparison of anchor settings across methods. (a) The initial anchors of CLRNet. (b) The anchors of CLRNet learned on CULane. (c) The anchors proposed by our method. (d) The ground truth.}
\label{anchor setting}
\end{figure}
Drawing inspiration from object detection methods, several anchor-based approaches, such as row anchors and straight line anchors, have been introduced for lane detection. These methods have demonstrated superior performance by leveraging anchor priors and enabling larger receptive fields for feature extraction. However, anchor-based methods face a central challenge in the configuration of anchor priors, specifically in determining the optimal number and locations of anchors. The number of anchors must be large enough to cover all potential lane locations, yet this increases model complexity and introduces numerous background (negative) anchors. Some studies utilize multiple row and column anchors, though label assignment remains a manually crafted process based on angle considerations. Alternatively, other approaches employ region proposal techniques, generating flexible, high-quality anchors for each image through theta maps and start point maps rather than fixed anchors. Though this strategy offers adaptability, its performance may lag behind fixed-anchor (one-stage) approaches, suffering from feature training under multi-task loss functions. This phenomenon also appears in our method, which we discuss later.
In this paper, to address these problems of anchor-based methods, we propose PolarRCNN, a two-stage model built on local and global polar coordinate systems. As shown in Figure \ref{anchor setting}, unlike previous works\cite{} that rely on a large number of predefined anchors, our method proposes fewer anchors of higher quality than other fixed-anchor methods. The proposal module is position-aware, and most proposed anchors lie close to the ground truth, providing a strong basis for the second stage to regress the lanes more accurately.
% We also introduce plor refinement to refine the anchor shape by segmentation. The architectures of our baseline is simple, only use the CNN and MLP layers, without any complicated block such as self attention or
Our main contributions are summarized as follows:
\begin{itemize}
\item We simplify the anchor parameters with local and global polar coordinate systems.
\item We propose a local polar module to generate a small set of high-quality, flexible anchors for each image.
\item Our proposed method achieves performance competitive with other advanced methods, reaching an F1@50 score above 80\% on CULane with a lightweight ResNet-18 backbone.
\end{itemize}
\section{Related Works}
Lane detection aims to detect lane instances in an image. In this section, we review only deep-learning-based methods, which can be categorized into segmentation-based, parameter-based, and anchor-based methods.
\textbf{Segmentation-based Methods.} Segmentation-based methods focus on pixel-wise prediction. They assign each pixel to a lane instance or the background\cite{} and predict information pixel by pixel. However, these methods overly focus on low-level and local features, neglecting global semantic information and real-time detection. SCNN uses a larger receptive field to overcome this problem. Some methods such as UFLDv1 and v2\cite{}\cite{} and CondLaneNet\cite{} utilize row-wise or column-wise classification instead of pixel classification to improve detection speed. Another issue with these methods is that the lane instance prior is learned by the model itself, leading to a lack of prior knowledge. LaneNet uses post-clustering to distinguish lane instances. UFLD divides lane instances by angles and locations and can only detect a fixed number of lanes. CondLaneNet utilizes different conditional dynamic kernels to predict different lane instances. Some methods such as FOLOLane\cite{} and GANet\cite{} use bottom-up strategies to detect a few key points and model their global relations to form lane instances.
\textbf{Parameter-based Methods.} Instead of predicting a series of point locations or pixel classes, parameter-based methods directly generate the curve parameters of lane instances. PolyLaneNet\cite{} and LSTR\cite{} consider the lane instance as a polynomial curve and output the polynomial coefficients directly. BézierLaneNet\cite{} treats the lane instance as a Bézier curve and generates the locations of the curve's control points. BSNet uses B-splines to describe the lane, with curve parameters that focus on the local shapes of lanes. Parameter-based methods are mostly end-to-end without postprocessing, which grants them faster speed. However, since the final lane shapes are highly sensitive to the predicted curve parameters, the robustness and generalization of parameter-based methods may be less than ideal.
\textbf{Anchor-Based Methods.} Inspired by methods in general object detection such as YOLO \cite{} and DETR \cite{}, anchor-based methods have been proposed for lane detection. Line-CNN is, to our knowledge, the earliest work that utilizes line anchors to detect lanes. Its anchors are designed as rays emitted from three edges (left, bottom, and right) of the image. However, the receptive field of the model focuses only on the edges, and the model is slower than some other methods. LaneATT \cite{} employs anchor-based feature pooling to aggregate features along the whole line anchor, achieving faster speed with better performance. Nevertheless, its grid sampling strategy and label assignment limit its potential. CLRNet \cite{} utilizes cross-layer refinement strategies, SimOTA label assignment \cite{}, and the LIoU loss to push anchor-based performance beyond most methods. The main advantage of anchor-based methods is that many strategies from anchor-based general object detection, such as label assignment, bounding box refinement, and the GIoU loss, can be easily transferred to lane detection. However, the disadvantages of existing anchor-based lane detection methods are also evident: the line anchors must be handcrafted, and the number of anchors is large, resulting in high computational cost. Motivated by this, ADNet \cite{} uses a theta map and a start point map to propose more flexible anchors, but its performance is lower than that of CLRNet, which employs a set of handcrafted predefined anchors.
To address the issues present in anchor-based methods, we have developed a novel anchor proposal module designed to achieve higher performance with fewer anchors.
\section{Method}
\subsection{Overall architecture}
To reduce the number of anchors, we design a two-stage network for lane detection similar to Faster R-CNN \cite{}. Figure \ref{} illustrates the overall pipeline of our model. The backbone extracts the image features, the local polar module serves as the first stage to propose line anchors, and the RCNN block serves as the second stage, aggregating features along the line anchors to predict lane instances. We introduce these blocks in detail in the following subsections.
\subsection{Lane and Line Anchor Representation}
Lanes are thin and long curves; a suitable lane prior helps the model extract features, predict locations, and model the shapes of lane curves more accurately. Following previous works\cite{}\cite{}, the lane priors in our work are straight lines, and we sample a sequence of 2D points on each line anchor, i.e., $ P\doteq \left\{ \left( x_1, y_1 \right) , \left( x_2, y_2 \right) , \ldots ,\left( x_N, y_N \right) \right\} $, where $N$ is the number of sampled points. The y-coordinates are uniformly sampled along the vertical axis of the image, i.e., $y_i=\frac{H}{N-1}\cdot i$, where $H$ is the image height. The same y-coordinates are also sampled from the ground-truth lane, and the model regresses the x-coordinate offsets from the line anchor to the ground-truth lane instance.
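As a concrete illustration, the sampling scheme and regression targets above can be sketched in a few lines of Python (the function names are ours, not from any released implementation):

```python
def uniform_ys(img_h, n_points):
    """Sample n_points y-coordinates uniformly over the image height,
    matching y_i = H / (N - 1) * i from the text."""
    return [img_h / (n_points - 1) * i for i in range(n_points)]

def x_offset_targets(anchor_xs, gt_xs):
    """Regression targets: x-offset from the line anchor to the
    ground-truth lane at each shared y-coordinate."""
    return [g - a for a, g in zip(anchor_xs, gt_xs)]
```

For a 320-pixel-high image and $N=5$, `uniform_ys(320, 5)` yields the rows $0, 80, 160, 240, 320$.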
\textbf{Polar Coordinate System.} Since the line anchors are always straight in our method, we use straight line parameters to describe them. Previous work uses a ray to describe a line anchor, parameterized by a start point and an orientation angle, i.e., $\left\{\theta, P_{xy}\right\}$, as shown in Figure \ref{coord} (a). A ray is an ambiguous description of a line because a line has infinitely many possible start points: as illustrated in Figure \ref{coord} (a), the yellow and dark green start points with the same orientation $\theta$ describe the same line. This ambiguity arises because a straight line has two degrees of freedom while a ray has three. Motivated by this, as shown in Figure \ref{coord} (b), we use polar coordinates to describe a line anchor with two parameters, angle and radius $\left\{\theta, r\right\}$, where $\theta \in \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$ and $r \in \left(-\infty, +\infty\right)$.
% \begin{figure}[t]
% \centering
% \begin{subfigure}{1\linewidth}
% \centering
% \includegraphics[width=0.2\linewidth]{lanefig/coord/ray.png}
% \caption{}
% \end{subfigure}
% \begin{subfigure}{1\linewidth}
% \centering
% \includegraphics[width=0.2\linewidth]{lanefig/coord/plor.png}
% \caption{}
% \end{subfigure}
% \caption{Different descriptions for anchor parameters. (a) Ray: start point and oritation. (b) Plor: radius and angle.}
% \label{coord}
% \end{figure}
% \begin{figure}[t]
% \centering
% \begin{subfigure}{0.49\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{lanefig/coord/ray.png}
% \caption{}
% \end{subfigure}
% \hfill
% \begin{subfigure}{0.49\linewidth}
% \centering
% \includegraphics[width=1\linewidth]{lanefig/coord/plor.png}
% \caption{}
% \end{subfigure}
% \caption{Different descriptions for anchor parameters. (a) Ray: start point and orientation. (b) Plor: radius and angle.}
% \label{coord}
% \end{figure}
The polar coordinate should have an origin point. We define two kinds of polar coordinate systems called the global coordinate system and the local coordinate system, with the origin points denoted as the global origin point $P_{0}^{\text{global}}$ and the local origin point $P_{0}^{\text{local}}$, respectively. For convenience, the global origin point is set around the static vanishing point of the lane image dataset, while the local origin points can be set anywhere within the image. From Figure \ref{coord}, it is easy to see that both the global and local coordinate systems share the same angle parameter $\theta$ for the same line anchor, with only the radius being different.
\subsection{Local Polar Proposal Module}
Just like the region proposal network in Faster R-CNN \cite{}, the local polar proposal module (LPM) aims to propose flexible, high-quality anchors for each image. The backbone receives an image $I \in \mathbb{R}^{3 \times H \times W}$ and outputs the feature map $F \in \mathbb{R}^{C_{f} \times H_{f} \times W_{f}}$. We set each of the $H_{f} \times W_{f}$ map grids as the local origin point of a distinct local polar system. The output of the LPM consists of two branches. The first, the polar coordinate branch, predicts the anchor parameters under the corresponding local polar coordinates, i.e., $\left[\mathbf{\Theta}^{H_{f} \times W_{f}}, \mathbf{\xi}^{H_{f}\times W_{f}}\right]$, denoting the angle and radius, respectively. The second, the polar confidence branch, predicts the confidence $\delta^{H_{f} \times W_{f}}$ of each proposed anchor in the first stage. To keep the model lightweight, the LPM is composed of several convolutional layers.
During the training stage, the ground truth of the proposal parameters is constructed as follows. The absolute value of the radius ground truth is defined as the shortest distance from a grid point (local polar origin point) to the lane curve. The ground truth of the angle is defined as the orientation of the link from the grid point to the nearest point on the curve. Grid points whose radius is less than a threshold $\tau$ are set as positive samples, while the others are set as negative samples. Figure \ref{LPM} illustrates the label construction process for the LPM. The LPM training loss function is as follows:
\begin{equation}
\begin{aligned}
\mathcal{L} _{LPM}&=BCE\left( \delta , \delta _{gt} \right) \\
&+\sum_i^{N_{pos}}{\left( d\left( \theta _{pos,i}-\theta _{gt,i} \right) +d\left( r_{pos,i}-r_{gt,i} \right) \right)}
\end{aligned}
\label{loss_lpm}
\end{equation}
where $BCE\left( \cdot , \cdot \right) $ denotes the binary cross-entropy loss and $d\left(\cdot \right)$ denotes the smooth-L1 loss. To keep backbone training stable, the gradients from the confidence branch to the backbone feature map are detached.
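A minimal Python sketch of this label construction for a single grid cell, approximating the lane curve by its sampled points (the function name and the nearest-sampled-point approximation are our simplifications; the paper's angle range $[-\frac{\pi}{2}, \frac{\pi}{2}]$ is not enforced here):

```python
import math

def lpm_labels(grid_origin, lane_points, tau):
    """For one grid cell (a local polar origin): the radius label is the
    distance to the nearest sampled lane point, the angle label is the
    orientation of the segment linking them, and the cell is positive
    iff the radius falls below the threshold tau."""
    gx, gy = grid_origin
    nearest = min(lane_points, key=lambda p: math.hypot(p[0] - gx, p[1] - gy))
    r = math.hypot(nearest[0] - gx, nearest[1] - gy)
    theta = math.atan2(nearest[1] - gy, nearest[0] - gx)
    return theta, r, r < tau
```

In a real pipeline this would run over all $H_f \times W_f$ grid cells against a densely sampled lane polyline.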
% \begin{figure}[t]
% \centering
% \includegraphics[width=0.7\linewidth]{lanefig/coord/localplor.png}
% \caption{Label construction for local plor proposal module.}
% \label{LPM}
% \end{figure}
During the test stage, once the local polar parameters of a line anchor are provided, they can be transformed to the global polar coordinates with the following equation:
\begin{equation}
\begin{aligned}
r^{G}=r^{L}+\left( x^{L}-x^{G} \right) \cos \theta
\\+\left( y^{L}-y^{G} \right) \sin \theta
\end{aligned}
\end{equation}
where $\left( x^{L}, y^{L} \right)$ and $\left( x^{G}, y^{G} \right)$ are the Cartesian coordinates of the local and global origin points, respectively.
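This conversion is a one-liner in code; the sketch below (names are ours) encodes the equation directly:

```python
import math

def local_to_global_r(r_local, theta, local_origin, global_origin):
    """Convert an anchor's radius from a local polar system to the global
    one; the angle theta is shared by both systems, only r changes."""
    xl, yl = local_origin
    xg, yg = global_origin
    return r_local + (xl - xg) * math.cos(theta) + (yl - yg) * math.sin(theta)
```

For example, with $\theta = 0$ the line is $x = x^{L} + r^{L}$, so moving the origin from $(2, 3)$ to $(0, 0)$ increases the radius by exactly $2$.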
\subsection{RCNN Module}
The second stage is the RCNN module, which accepts the pooled line features as input and predicts the accurate lane shape and location. Once the global polar parameters $\left\{ \theta , r \right\} $ of a proposal anchor are provided, feature points can be sampled on the line anchor. The y-coordinates are uniformly sampled along the vertical axis of the image as mentioned before, and $x_{i}$ is calculated by the following equation:
\begin{equation}
\begin{aligned}
x_{i\,\,}=-y_i\tan \theta +\frac{r}{\cos \theta}
\end{aligned}
\end{equation}
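A small Python sketch of this sampling step (the function name is ours); note that for $\theta = 0$ the line is vertical in image space and every row samples the same $x = r$:

```python
import math

def sample_anchor_xs(theta, r, ys):
    """x_i = -y_i * tan(theta) + r / cos(theta) for each sampled y_i,
    i.e. where the line {theta, r} in global polar form crosses each row."""
    return [-y * math.tan(theta) + r / math.cos(theta) for y in ys]
```

Each sampled point $(x_i, y_i)$ satisfies the line equation $x\cos\theta + y\sin\theta = r$, which is a quick sanity check on the formula.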
The RCNN module consists of several MLP layers and predicts the confidence and the coordinate offset of $x_{i}$. During the training stage, all of the $H_{f}\times W_{f}$ proposed anchors participate, and the SimOTA \cite{} label assignment strategy is used for the RCNN module to determine which anchors are positive, irrespective of the confidence predicted by the LPM. These strategies are employed because the negative/background anchors are also crucial for the adaptability of the RCNN module.
The loss function is as follows:
\begin{equation}
\begin{aligned}
\mathcal{L} _{RCNN}=c_{cls}\mathcal{L} _{cls}+c_{loc}\mathcal{L} _{loc}
\end{aligned}
\end{equation}
where $\mathcal{L} _{cls}$ is the focal loss and $\mathcal{L} _{loc}$ is the LaneIoU loss\cite{}.
In the testing stage, the anchors with the top-$k_{l}$ confidences are chosen as the proposal anchors, and these $k_{l}$ anchors are fed into the RCNN module to obtain the final predictions.
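The test-time selection can be sketched as follows (a simplified stand-in for the actual implementation, with names of our choosing):

```python
def select_topk(anchors, confidences, k):
    """Keep the k proposal anchors with the highest first-stage
    confidence; these are the only anchors passed to the RCNN module."""
    order = sorted(range(len(anchors)),
                   key=lambda i: confidences[i], reverse=True)
    return [anchors[i] for i in order[:k]]
```

In practice this step is what lets the second stage operate on a handful of anchors rather than all $H_f \times W_f$ proposals.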
\section{Experiment}
\begin{table*}[h]
\centering
\caption{Comparison with state-of-the-art methods on CULane}
\begin{tabular}{cccccccccccc}
\hline
\textbf{Method}& \textbf{Backbone}& \textbf{F1@50}$\uparrow$ & \textbf{Normal}$\uparrow$&\textbf{Crowded}$\uparrow$&\textbf{Dazzle}$\uparrow$&\textbf{Shadow}$\uparrow$&\textbf{No line}$\uparrow$& \textbf{Arrow}$\uparrow$& \textbf{Curve}$\uparrow$& \textbf{Cross}$\downarrow$ & \textbf{Night}$\uparrow$ \\
\hline
\textbf{Segmentation Based} \\
\cline{1-1}
SCNN &VGG-16&71.60&90.60&69.70&58.50&66.90&43.40&84.10&64.40&1900&66.10 \\
RESA &ResNet50&75.3&92.10&73.10&69.20&72.80&47.70&83.30&70.30&1503&69.90 \\
LaneAF &DLA34&77.41&91.80&75.61&71.78&79.12&51.38&86.88&72.70&1360&73.03 \\
\cline{1-1}
\textbf{Parameter Based} \\
\cline{1-1}
% LSTR &ResNet18&64.00&&&&&&& \\
BezierLanenet &ResNet18&73.67&90.22&71.55&62.49&70.91&45.30&84.09&58.98&996&68.70\\
BSNet &ResNet34&79.89&93.75&78.01&76.65&79.55&54.69&90.72&73.99& 1455&75.28\\
% Eigenlanes &ResNet50&77.20&&&&&&&&&&&& \\
% Laneformer &ResNet50&77.06&&&&&&&&&&& \\
\cline{1-1}
\textbf{Anchor Based} \\
\cline{1-1}
LaneATT &ResNet122&77.02&91.74&76.16&69.47&76.31&50.46&86.29&64.05&1264&70.81 \\
ADNet &ResNet34&78.94&92.90&77.45&71.71&79.11&52.89&89.90&70.64&1499&74.78 \\
% CLRNet &ResNet34&79.73&&&&&&&& \\
CLRNet &ResNet101&80.13&93.85&78.78&72.49&82.33&54.50&89.79&75.57&1262&75.51 \\
CLRNet &DLA34&80.47&93.73&79.59&75.30&82.51&54.58&90.62&74.13&1155&75.37 \\
\hline
PolarRCNN (ours) &ResNet18&80.81&94.11&79.62&75.65&82.43&54.41&90.49&77.02&975&75.59\\
PolarRCNN-NMS-free (ours) &ResNet18&80.37&93.81&79.07&74.73&81.40&53.73&89.91&75.64&\textbf{941}&75.41\\
PolarRCNN (ours) &ResNet34&80.95&\textbf{94.33}&79.68&\textbf{75.87}&82.89&55.69&\textbf{90.90}&78.40&1182&75.84\\
PolarRCNN (ours) &ResNet50&\textbf{81.23}&94.32&\textbf{80.22}&75.04&\textbf{83.40}&\textbf{56.25}&90.41&\textbf{78.94}&1271&\textbf{76.16}\\
\hline
\end{tabular}
\label{tab:my_table}
\end{table*}
\begin{table}[h]
\centering
\begin{tabular}{cccccc}
\hline
\textbf{Fixed anchor}& \textbf{Polar angle}& \textbf{Polar $r$}&\textbf{Aux.\ loss}&\textbf{F1@50}&\textbf{F1@75} \\
\hline
\checkmark&&&\checkmark&80.29&62.05\\
&&\checkmark&\checkmark&54.48&25.28\\
&\checkmark&&\checkmark&80.30&62.67\\
&\checkmark&\checkmark&&80.51&63.09\\
&\checkmark&\checkmark&\checkmark&\textbf{80.81}&\textbf{63.64}\\
\hline
\end{tabular}
\caption{Ablation study of the anchor parameterization and the auxiliary loss on CULane}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{ccccc}
\hline
\textbf{Local Polar Map Size}& \textbf{Top-60}& \textbf{Top-40}&\textbf{Top-20}&\textbf{Top-10} \\
\hline
$2\times10$&/&/&80.54&80.50\\
$4\times10$&/&80.81&80.81&80.39\\
$5\times12$&80.86&80.86&80.82&79.68\\
\hline
\end{tabular}
\caption{Ablation study of the local polar map size and top-$k$ anchor selection on CULane (F1@50)}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{cccccc}
\hline
\textbf{Method}& \textbf{Backbone}& \textbf{F1(\%)}&\textbf{Acc(\%)}&\textbf{FP(\%)}&\textbf{FN(\%)} \\
\hline
SCNN&VGG16&95.97&96.53&6.17&1.80\\
PolyLaneNet&EfficientNetB0&90.62&93.36&9.42&9.33\\
UFLD&ResNet18&87.87&95.82&19.05&3.92\\
UFLD&ResNet34&88.02&95.86&18.91&3.75\\
LaneATT&ResNet34&96.77&95.63&3.53&2.92\\
LaneATT&ResNet122&96.06&96.10&5.64&2.17\\
FOLOLane&ERFNet&96.59&96.92&4.47&2.28\\
CondLaneNet&ResNet101&97.24&96.54&2.01&3.50\\
CLRNet&ResNet18&97.89&96.84&2.28&1.92\\
\hline
PolarRCNN (ours)&ResNet18&\textbf{98.00}&96.00&\textbf{1.75}&2.25\\
PolarRCNN-NMS-free (ours)&ResNet18&97.65&96.02&2.52&2.15\\
\hline
\end{tabular}
\caption{Comparison with other methods on TuSimple}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{cccc}
\hline
\textbf{Attention}& \textbf{Id Embeddings}& \textbf{Polar Embeddings}&\textbf{F1@50}\\
\hline
&&&69.12\\
\checkmark&&&75.55\\
\checkmark&\checkmark&&78.30\\
\checkmark&&\checkmark&76.14\\
\checkmark&\checkmark&\checkmark&80.37\\
\hline
\end{tabular}
\caption{Ablation study of attention and embedding components on CULane}
\end{table}
% \section{References Section}
% You can use a bibliography generated by BibTeX as a .bbl file.
% BibTeX documentation can be easily obtained at:
% http://mirror.ctan.org/biblio/bibtex/contrib/doc/
% The IEEEtran BibTeX style support page is:
% http://www.michaelshell.org/tex/ieeetran/bibtex/
% argument is your BibTeX string definitions and bibliography database(s)
%\bibliography{IEEEabrv,../bib/paper}
%
% \section{Simple References}
% You can manually copy in the resultant .bbl file and set second argument of $\backslash${\tt{begin}} to the number of references
% (used to reserve space for the reference number labels box).
\bibliographystyle{IEEEtran}
\bibliography{ref}
% \begin{thebibliography}{1}
% \bibliographystyle{IEEEtran}
% \bibitem{ref1}
% {\it{Mathematics Into Type}}. American Mathematical Society. [Online]. Available: https://www.ams.org/arc/styleguide/mit-2.pdf
% \bibitem{ref2}
% T. W. Chaundy, P. R. Barrett and C. Batey, {\it{The Printing of Mathematics}}. London, U.K., Oxford Univ. Press, 1954.
% \bibitem{ref3}
% F. Mittelbach and M. Goossens, {\it{The \LaTeX Companion}}, 2nd ed. Boston, MA, USA: Pearson, 2004.
% \bibitem{ref4}
% G. Gr\"atzer, {\it{More Math Into LaTeX}}, New York, NY, USA: Springer, 2007.
% \bibitem{ref5}M. Letourneau and J. W. Sharp, {\it{AMS-StyleGuide-online.pdf,}} American Mathematical Society, Providence, RI, USA, [Online]. Available: http://www.ams.org/arc/styleguide/index.html
% \bibitem{ref6}
% H. Sira-Ramirez, ``On the sliding mode control of nonlinear systems,'' \textit{Syst. Control Lett.}, vol. 19, pp. 303--312, 1992.
% \bibitem{ref7}
% A. Levant, ``Exact differentiation of signals with unbounded higher derivatives,'' in \textit{Proc. 45th IEEE Conf. Decis.
% Control}, San Diego, CA, USA, 2006, pp. 5585--5590. DOI: 10.1109/CDC.2006.377165.
% \bibitem{ref8}
% M. Fliess, C. Join, and H. Sira-Ramirez, ``Non-linear estimation is easy,'' \textit{Int. J. Model., Ident. Control}, vol. 4, no. 1, pp. 12--27, 2008.
% \bibitem{ref9}
% R. Ortega, A. Astolfi, G. Bastin, and H. Rodriguez, ``Stabilization of food-chain systems using a port-controlled Hamiltonian description,'' in \textit{Proc. Amer. Control Conf.}, Chicago, IL, USA,
% 2000, pp. 2245--2249.
% \end{thebibliography}
\vfill
\end{document}