
About this Book

This book constitutes the refereed proceedings of the 8th International Conference on Mathematical Aspects of Computer and Information Sciences, MACIS 2019, held in Gebze, Turkey, in November 2019.

The 22 revised papers and 14 short papers presented were carefully reviewed and selected from 66 submissions. The papers are organized in the following topical sections: algorithms and foundations; security and cryptography; combinatorics, codes, designs and graphs; data modeling and machine learning; tools and software track.

Table of Contents

Frontmatter

Algorithms and Foundations

Frontmatter

Certified Hermite Matrices from Approximate Roots - Univariate Case

Let $$f_1, \ldots , f_m$$ be univariate polynomials with rational coefficients and $$\mathcal {I}:=\langle f_1, \ldots , f_m\rangle \subset {\mathbb Q}[x]$$ be the ideal they generate. Assume that we are given approximations $$\{z_1, \ldots , z_k\}\subset \mathbb {Q}[i]$$ for the common roots $$\{\xi _1, \ldots , \xi _k\}=V(\mathcal {I})\subseteq {\mathbb C}$$. In this study, we describe a symbolic-numeric algorithm to construct a rational matrix, called Hermite matrix, from the approximate roots $$\{z_1, \ldots , z_k\}$$ and certify that this matrix is the true Hermite matrix corresponding to the roots $$V({\mathcal I})$$. Applications of Hermite matrices include counting and locating real roots of the polynomials and certifying their existence.
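As a minimal illustration of the underlying object (not the authors' certification algorithm), the Hermite matrix can be assembled as the Hankel matrix of power sums of the roots; for exact roots, its rank counts the distinct roots and its signature the distinct real ones:

```python
def hermite_matrix(roots):
    # Hankel matrix of power sums p_j = sum_i z_i**j of the given
    # (approximate) roots; for exact roots, the rank counts distinct
    # roots and the signature counts distinct real roots
    k = len(roots)
    p = [sum(z ** j for z in roots) for j in range(2 * k - 1)]
    return [[p[i + j] for j in range(k)] for i in range(k)]
```

Certification then amounts to verifying that the rational rounding of this numerically built matrix is the true Hermite matrix of the ideal.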

Tulay Ayyildiz Akoglu, Agnes Szanto

On Parametric Border Bases

We study several properties of border bases of parametric polynomial ideals and introduce a notion of a minimal parametric border basis. It is especially important for improving the quantifier elimination algorithm based on the computation of comprehensive Gröbner systems.

Yosuke Sato, Hiroshi Sekigawa, Ryoya Fukasaku, Katsusuke Nabeshima

Reliable Computation of the Singularities of the Projection in $$\mathbb R^3$$ of a Generic Surface of $$\mathbb R^4$$

Computing efficiently the singularities of surfaces embedded in $$\mathbb R^3$$ is a difficult problem, and most state-of-the-art approaches only handle the case of surfaces defined by polynomial equations. Let F and G be $$C^\infty $$ functions from $$\mathbb R^4$$ to $$\mathbb R$$ and $$\mathcal M=\{(x,y,z,t) \in \mathbb R^4 \, | \, F(x,y,z,t)=G(x,y,z,t)=0\}$$ be the surface they define. Generically, the surface $$\mathcal M$$ is smooth and its projection $$\Omega $$ in $$\mathbb R^3$$ is singular. After describing the types of singularities that appear generically in $$\Omega $$, we design a numerically well-posed system that encodes them. This can be used to return a set of boxes that enclose the singularities of $$\Omega $$ as tightly as required. As opposed to state-of-the-art approaches, our approach is not restricted to polynomial mappings and can handle, for example, trigonometric or exponential functions.

Sény Diatta, Guillaume Moroz, Marc Pouget

Evaluation of Chebyshev Polynomials on Intervals and Application to Root Finding

In approximation theory, it is standard to approximate functions by polynomials expressed in the Chebyshev basis. Evaluating a polynomial f of degree n given in the Chebyshev basis can be done in O(n) arithmetic operations using the Clenshaw algorithm. Unfortunately, the evaluation of f on an interval I using the Clenshaw algorithm with interval arithmetic returns an interval of width exponential in n. We describe a variant of the Clenshaw algorithm based on ball arithmetic that returns an interval of width quadratic in n for an interval of small enough width. As an application, our variant of the Clenshaw algorithm can be used to design an efficient root finding algorithm.
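For reference, the standard Clenshaw recurrence for a polynomial $$f = \sum_k c_k T_k$$ in the Chebyshev basis (the variant above adds ball-arithmetic error control on top of this):

```python
def clenshaw(coeffs, x):
    # evaluate sum_k coeffs[k] * T_k(x) via the Clenshaw recurrence:
    # b_k = c_k + 2*x*b_{k+1} - b_{k+2}, then f(x) = c_0 + x*b_1 - b_2
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = c + 2 * x * b1 - b2, b1
    return coeffs[0] + x * b1 - b2
```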

Viviane Ledoux, Guillaume Moroz

Proving Two Conjectural Series for $$\zeta (7)$$ and Discovering More Series for $$\zeta (7)$$

We give a proof of two identities involving binomial sums at infinity conjectured by Zhi-Wei Sun. In order to prove these identities, we use a recently presented method: we view the series as specializations of generating series and derive integral representations. Using substitutions, we express these integral representations in terms of cyclotomic harmonic polylogarithms. Finally, by applying known relations among the cyclotomic harmonic polylogarithms, we derive the results. These methods are implemented in the computer algebra package HarmonicSums.

Jakob Ablinger

Generalized Integral Dependence Relations

A generalization of integral dependence relations in a ring of convergent power series is studied in the context of symbolic computation. Based on the theory of Grothendieck local duality on residues, an effective algorithm is introduced for computing generalized integral dependence relations. It is shown that, with the aid of local cohomology, generalized integral dependence relations in the ring of convergent power series can be computed in a polynomial ring. An extension of the proposed method to parametric cases is also discussed.

Katsusuke Nabeshima, Shinichi Tajima

Hilbert-Type Dimension Polynomials of Intermediate Difference-Differential Field Extensions

Let K be an inversive difference-differential field and L a (not necessarily inversive) finitely generated difference-differential field extension of K. We consider the natural filtration of the extension L/K associated with a finite system $$\eta $$ of its difference-differential generators and prove that for any intermediate difference-differential field F, the transcendence degrees of the components of the induced filtration of F are expressed by a certain numerical polynomial $$\chi _{K, F,\eta }(t)$$. This polynomial is closely connected with the dimension Hilbert-type polynomial of a submodule of the module of Kähler differentials $$\varOmega _{L^{*}|K}$$ where $$L^{*}$$ is the inversive closure of L. We prove some properties of polynomials $$\chi _{K, F,\eta }(t)$$ and use them for the study of the Krull-type dimension of the extension L/K. In the last part of the paper, we present a generalization of the obtained results to multidimensional filtrations of L/K associated with partitions of the sets of basic derivations and translations.

Alexander Levin

Comprehensive LU Factors of Polynomial Matrices

The comprehensive LU decomposition of a parametric matrix consists of a case analysis of the LU factors for each specialization of the parameters. Special cases can be discontinuous with respect to the parameters, the discontinuities being triggered by zero pivots encountered during factorization. For polynomial matrices, we describe an implementation of comprehensive LU decomposition in Maple, using the RegularChains package.
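A plain, non-parametric LU factorization over the rationals makes the pivot issue concrete: specializing parameters so that a leading pivot vanishes changes the shape of the factors, which is exactly what the comprehensive case analysis tracks. A minimal sketch, not the Maple implementation:

```python
from fractions import Fraction

def lu_nopivot(A):
    # exact LU factorization without pivoting; a zero pivot raises,
    # mirroring the discontinuities triggered by parameter specialization
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        if U[k][k] == 0:
            raise ZeroDivisionError("zero pivot in column %d" % k)
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U
```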

Ana C. Camargos Couto, Marc Moreno Maza, David Linder, David J. Jeffrey, Robert M. Corless

Sublinear Cost Low Rank Approximation via Subspace Sampling

Low Rank Approximation (LRA) of a matrix is a hot research subject, fundamental for Matrix and Tensor Computations and Big Data Mining and Analysis. Computations with LRA can be performed at sublinear cost, that is, by using much fewer memory cells and arithmetic operations than an input matrix has entries. Although every sublinear cost algorithm for LRA fails to approximate the worst case inputs, we prove that our sublinear cost variations of a popular subspace sampling algorithm output accurate LRA of a large class of inputs. Namely, they do so with a high probability (whp) for a random input matrix that admits its LRA. In other papers we propose and analyze other sublinear cost algorithms for LRA and Linear Least Squares Regression. Our numerical tests are in good accordance with our formal results.

Victor Y. Pan, Qi Luan, John Svadlenka, Liang Zhao

CUR LRA at Sublinear Cost Based on Volume Maximization

A matrix algorithm runs at sublinear cost if it uses much fewer memory cells and arithmetic operations than the input matrix has entries. Such algorithms are indispensable for Big Data Mining and Analysis, where input matrices are so immense that one can only access a small fraction of all their entries. Typically, however, such matrices admit their Low Rank Approximation (LRA), which one can access and process at sublinear cost. Can we, however, compute LRA at sublinear cost? An adversary argument shows that no algorithm running at sublinear cost can output accurate LRA of worst case input matrices or even of the matrices of small families of our Appendix A, but we prove that some sublinear cost algorithms output a reasonably close LRA of a matrix W if (i) this matrix is sufficiently close to a low rank matrix or (ii) it is a Symmetric Positive Semidefinite (SPSD) matrix that admits LRA. In both cases the supporting algorithms are deterministic and output LRA in its special form of CUR LRA, which is particularly memory efficient. The design of our algorithms and the proof of their correctness rely on the results of extensive previous study of CUR LRA in Numerical Linear Algebra using volume maximization. In case (i) we apply Cross-Approximation (C-A) iterations, which run at sublinear cost and have been computing accurate LRA worldwide for more than a decade. We provide the first formal support for this long-known empirical efficiency, assuming non-degeneracy of the initial submatrix of at least one C-A iteration. We cannot ensure non-degeneracy at sublinear cost for a worst case input but prove that it holds with a high probability (whp) for any initialization in the case of a random or randomized input. Empirically we can replace randomization with sparse multiplicative preprocessing of an input matrix, performed at sublinear cost.
In case (ii) we make no additional assumptions about the input class of SPSD matrices admitting LRA or about the initialization of our sublinear cost algorithms for CUR LRA, which promise to be practically valuable. We hope that a proper combination of our deterministic techniques with randomized LRA methods, popular among Computer Science researchers, will lead to further progress in LRA.
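A single cross-approximation step, sketched below under the simplest assumptions (rank-one target, nonzero pivot), shows the sublinear access pattern: only one row and one column of the input are read.

```python
def cross_approx_step(A, i, j):
    # rank-1 CUR-style update built from the "cross" through pivot A[i][j];
    # reads only row i and column j of A
    piv = A[i][j]
    return [[A[r][j] * A[i][c] / piv for c in range(len(A[0]))]
            for r in range(len(A))]
```

For a rank-one matrix any nonzero pivot reconstructs A exactly; volume maximization chooses pivot submatrices that keep such steps numerically reliable for higher ranks.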

Qi Luan, Victor Y. Pan

New Practical Advances in Polynomial Root Clustering

We report an ongoing work on clustering algorithms for complex roots of a univariate polynomial p of degree d with real or complex coefficients. As in the previous best subdivision algorithms, our root-finders are robust even for multiple roots of a polynomial given by a black box for the approximation of its coefficients, and their complexity decreases at least proportionally to the number of roots in a region of interest (ROI) on the complex plane, such as a disc or a square, but we greatly strengthen the main ingredient of the previous algorithms. We build the foundation for a new counting test that essentially amounts to the evaluation of a polynomial p and its derivative $$p'$$, which is a major benefit, e.g., for sparse polynomials p. Moreover, with evaluation at about $$\log (d)$$ points (versus the previous record of order d) we output the correct number of roots in a disc whose contour has no roots of p nearby. Our second and less significant contribution concerns subdivision algorithms for polynomials with real coefficients. Our tests demonstrate the power of the proposed algorithms.

Rémi Imbach, Victor Y. Pan

On the Chordality of Simple Decomposition in Top-Down Style

Simple decomposition of polynomial sets computes conditionally squarefree triangular sets or systems with certain zero or ideal relationships with the polynomial sets. In this paper we study the chordality of polynomial sets occurring in the process of simple decomposition in top-down style. We first reformulate Wang’s algorithm for simple decomposition in top-down style so that the decomposition process can be described in an inductive way. Then we prove that for a polynomial set whose associated graph is chordal, all the polynomial sets in the process of Wang’s algorithm for computing simple decomposition of this polynomial set have associated graphs which are subgraphs of the input chordal graph.

Chenqi Mou, Jiahua Lai

Automatic Synthesis of Merging and Inserting Algorithms on Binary Trees Using Multisets in Theorema

We demonstrate the automatic proof-based synthesis of merging and inserting algorithms for [sorted] binary trees, using the notion of multisets, in the Theorema system. Each algorithm is extracted from the proof of the conjecture based on the specification of the desired function, in the form of a list of [conditional] equalities, which can be directly executed. The proofs are performed in natural style, using general techniques, but most importantly efficient inference rules and strategies specific for the domains involved. In particular we present specific techniques for the construction of arbitrarily nested recursive algorithms by general Noetherian induction, as well as a systematic method for the generation of the conjectures and consequently of the algorithms for the auxiliary functions needed in the main function.

Isabela Drămnesc, Tudor Jebelean

Algebraic Analysis of Bifurcations and Chaos for Discrete Dynamical Systems

This paper deals with the stability, bifurcations and chaotic behaviors of discrete dynamical systems by using methods of symbolic computation. We explain how to reduce the problems of analyzing the stability, bifurcations and chaos induced by snapback repellers to algebraic problems, and solve them by using an algorithmic approach based on methods for solving semi-algebraic systems. The feasibility of the symbolic approach is demonstrated by analyses of the dynamical behaviors for several discrete models.

Bo Huang, Wei Niu

Security and Cryptography

Frontmatter

Acceleration of Spatial Correlation Based Hardware Trojan Detection Using Shared Grids Ratio

Due to mostly economic reasons, almost all countries, including the developed ones, have to hand over integrated circuit designs to a foreign fab for manufacturing, which raises security issues such as intentional malicious circuit (hardware Trojan) insertion by an adversary. A previously proposed method to address these security issues detects hardware Trojans using spatial correlations in accordance with delay-based side channel analysis. However, it has never been applied to full circuits, and it requires too many path delay computations to select correlated path pairs. In this paper, we first apply the method and present the results for full circuits; then the method is accelerated by proposing a novel path selection criterion which avoids the computation of path delays. In terms of detection success, the resulting method performs similarly to the previous one, but in a much faster fashion.

Fatma Nur Esirci, Alp Arslan Bayrakci

A Parallel GPU Implementation of SWIFFTX

The SWIFFTX algorithm is one of the candidates of the SHA-3 Hash Competition that uses the number theoretic transform (NTT). It has 256-byte input blocks and 65-byte output blocks. In this paper, a parallel implementation of the algorithm and particular techniques to make it faster on GPUs are proposed. We target version 6.1 of the NVIDIA® CUDA™ compute architecture, which employs an ISA (Instruction Set Architecture) called Parallel Thread Execution (PTX) that possesses special intrinsics; hence we modify the reference implementation for better results. Experimental results indicate an almost 10x improvement in speed and a 5 W decrease in power consumption per $$2^{16}$$ hashes.

Metin Evrim Ulu, Murat Cenk

Computing an Invariant of a Linear Code

In this work we present an efficient algorithm that generates the leader codewords of a linear code in an incremental form. Using the set of leader codewords, we define a transformation that remains invariant only if the codes are equivalent, which serves as a signature for checking the code equivalence problem. An upper bound on the weight of the codewords is imposed in this algorithm in order to obtain a smaller set that can also be used as a signature for the code equivalence problem.

Mijail Borges-Quintana, Miguel Ángel Borges-Trenard, Edgar Martínez-Moro, Gustavo Torres-Guerrero

Generalized Secret Sharing Schemes Using NMDS Codes

Mehta et al. [11] recently proposed an $${{\,\mathrm{NMDS}\,}}$$ code-based secret sharing scheme having a richer access structure than the traditional (t, n) threshold secret sharing schemes; their scheme is based on two mutually nonmonotonic sets of user groups of sizes t and $$t-1$$ respectively, where $$n \ge t > 1$$ corresponds to the total number of users. We give a full generalization of their scheme with complete security proofs. We propose an efficient generalized secret sharing scheme constructed using $${{\,\mathrm{N^{\mu }MDS}\,}}$$ codes with time complexity of $$O(n^3)$$. The scheme accepts an access structure constructed using $$\mu +1$$ mutually nonmonotonic sets of user groups with sizes, $$t, t-1, \dots , t-\mu $$, respectively, where $$1 \le \mu < t$$, and the parameter t defines the threshold such that all user groups of size greater than t can recover the secret. The proposed secret sharing scheme is perfect and ideal and has robust cheating detection and cheater identification features.

Sanyam Mehta, Vishal Saraswat

Exploiting Linearity of Modular Multiplication

The XOR and the addition $$\boxplus $$ operations have been widely used as building blocks for many cryptographic primitives. These operations, together with the multiplication $$\odot $$ operation, are successively used in the design of IDEA and the MESH block ciphers. This work presents several interesting algebraic properties of the multiplication operation. By fixing one operand, we obtain a vector-valued function $$\varvec{g}_Z$$ on $$\mathbb Z_{2}^n$$ associated with $$\odot $$. In this paper we show that the nonlinearity of $$\varvec{g}_Z$$ remains the same under some transformations of Z, and moreover we give an upper bound for the nonlinearity of $$\varvec{g}_Z$$ when Z is a power of 2. Under weak-key assumptions, we furthermore present a list of new linear relations for the 1-round IDEA cipher, some of them directly derived and others algorithmically generated using these relations and known ones. We extend the largest linear weak-key class for the IDEA cipher of size $$2^{23}$$ to such a class of size $$2^{24}$$. Under the independent key subblocks (subkeys) and weak-key assumptions we derive many linear relations for the IDEA cipher using linear relations for the 1-round IDEA cipher.
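For concreteness, the $$\odot $$ operation in IDEA is multiplication modulo $$2^{16}+1$$, with the all-zero word encoding $$2^{16}$$; a direct sketch:

```python
def idea_mul(a, b, n=16):
    # IDEA multiplication modulo 2**n + 1; the all-zero n-bit word
    # encodes the value 2**n (possible since 2**n + 1 is prime for n = 16)
    mod = (1 << n) + 1
    r = ((a or 1 << n) * (b or 1 << n)) % mod
    return r & ((1 << n) - 1)   # 2**n maps back to the all-zero word
```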

Hamdi Murat Yıldırım

Combinatorics, Codes, Designs and Graphs

Frontmatter

On a Weighted Spin of the Lebesgue Identity

Alladi studied partition theoretic implications of a two variable generalization of the Lebesgue identity. In this short note, we focus on a slight variation of the basic hypergeometric sum that Alladi studied. We present two new partition identities involving weights.

Ali Kemal Uncu

Edge-Critical Equimatchable Bipartite Graphs

A graph is called equimatchable if all of its maximal matchings have the same size. Lesk et al. [6] provided a characterization of equimatchable bipartite graphs. Since this characterization is not structural, Frendrup et al. [4] also provided a structural characterization for equimatchable graphs with girth at least five; in particular, a characterization for equimatchable bipartite graphs with girth at least six. In this work, we extend the partial characterization of Frendrup et al. [4] to equimatchable bipartite graphs without any restriction on girth. For an equimatchable graph, an edge is said to be critical if the graph obtained by the removal of this edge is not equimatchable. An equimatchable graph is called edge-critical if every edge is critical. Reducing the characterization of equimatchable bipartite graphs to the characterization of edge-critical equimatchable bipartite graphs, we give two characterizations of edge-critical equimatchable bipartite graphs.
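Equimatchability is easy to experiment with by brute force on small graphs; a sketch (exhaustive, exponential, for toy instances only):

```python
def maximal_matching_sizes(edges):
    # sizes of all maximal matchings of a small graph, by exhaustive growth
    sizes, seen, frontier = set(), {frozenset()}, [frozenset()]
    while frontier:
        m = frontier.pop()
        ext = [e for e in edges if all(not set(e) & set(f) for f in m)]
        if not ext:                 # no vertex-disjoint edge left: maximal
            sizes.add(len(m))
        for e in ext:
            nm = m | {e}
            if nm not in seen:
                seen.add(nm)
                frontier.append(nm)
    return sizes

def is_equimatchable(edges):
    return len(maximal_matching_sizes(edges)) == 1
```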

Yasemin Büyükçolak, Didem Gözüpek, Sibel Özkan

Determining the Rank of Tensors in $$\mathbb {F}_q^2\otimes \mathbb {F}_q^3\otimes \mathbb {F}_q^3$$

Let $$\mathbb {F}_q$$ be a finite field of order q. This paper uses the classification in [7] of orbits of tensors in $$\mathbb {F}_q^2\otimes \mathbb {F}_q^3\otimes \mathbb {F}_q^3$$ to define two algorithms that take an arbitrary tensor in $$\mathbb {F}_q^2\otimes \mathbb {F}_q^3\otimes \mathbb {F}_q^3$$ and return its orbit, a representative of its orbit, and its rank.

Nour Alnajjarine, Michel Lavrauw

Second Order Balance Property on Christoffel Words

In this paper we study the balance matrix, which gives the order of balance of any binary word. In addition, we define for Christoffel words a new matrix called the second order balance matrix. This matrix gives finer information on the balance property of a word: it codes the number of occurrences of the letter 1 in successive blocks of the same length of the studied Christoffel word. By taking the maximum of the second order balance matrix, we define the second order of balance, and we are able to order the Christoffel words according to these values. Our construction uses extensively the continued fraction associated with the slope of each Christoffel word, and we prove a recursive formula based on fine properties of the Stern-Brocot tree to construct second order balance matrices.
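As a small illustration (helper names are ours, not the paper's notation), the lower Christoffel word of slope p/q and the block counts that make up one row of a balance matrix can be generated as follows:

```python
def christoffel(p, q):
    # lower Christoffel word of slope p/q over {'0','1'}, gcd(p, q) = 1:
    # letter i is '1' exactly when i*p mod (p+q) drops below its predecessor
    n = p + q
    return ''.join('0' if (i * p) % n > ((i - 1) * p) % n else '1'
                   for i in range(1, n + 1))

def block_ones(w, length):
    # occurrences of '1' in the circular factors of the given length;
    # one row of the balance matrix
    return [sum(c == '1' for c in (w + w)[i:i + length])
            for i in range(len(w))]
```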

Lama Tarsissi, Laurent Vuillon

IPO-Q: A Quantum-Inspired Approach to the IPO Strategy Used in CA Generation

Covering arrays are combinatorial structures that can be considered generalizations of orthogonal arrays and find application in automated software testing, among other fields. The construction of covering arrays is a highly researched topic, with existing works focusing on heuristic, metaheuristic and combinatorial algorithms to successfully construct covering arrays with a small number of rows. In this paper, we introduce the IPO-Q algorithm, which combines a recently introduced quantum-inspired evolutionary algorithm with the widely used in-parameter-order (IPO) strategy for covering array generation. We implemented different versions of this algorithm and evaluate them, by means of selected covering array instances, against each other and against an algorithm implementing the IPO strategy.
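The object being constructed can be pinned down with a small verifier: a strength-t covering array over v symbols must exhibit every t-tuple of symbols in every t-subset of columns. A sketch:

```python
from itertools import combinations

def is_covering_array(rows, t, v):
    # check that every choice of t columns jointly covers all v**t tuples
    k = len(rows[0])
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in rows}
        if len(seen) < v ** t:
            return False
    return True
```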

Michael Wagner, Ludwig Kampel, Dimitris E. Simos

A Fast Counting Method for 6-Motifs with Low Connectivity

A k-motif (or graphlet) is a subgraph on k nodes in a graph or network. Counting motifs in complex networks has been a well-studied problem in the network analysis of various real-world graphs arising from the study of social networks and bioinformatics. In particular, the triangle counting problem has received much attention due to its significance in understanding the behavior of social networks. Similarly, subgraphs with more than 3 nodes have received much attention recently. While successful methods have been developed for this problem, most of the existing algorithms are not scalable to large networks with millions of nodes and edges. The main contribution of this paper is a preliminary study that generalizes the exact counting algorithm provided by Pinar, Seshadhri and Vishal to a collection of 6-motifs. This method uses the counts of motifs of smaller size to obtain the counts of 6-motifs with low connectivity, that is, containing a cut-vertex or a cut-edge. Therefore, it circumvents the combinatorial explosion that naturally arises when counting subgraphs in large networks.
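The base ingredient of such combination formulas is exact counts of smaller motifs, e.g. triangles; a brute-force counter (fine for toy graphs, not the scalable methods discussed in the paper):

```python
from itertools import combinations

def count_triangles(adj):
    # adj: dict mapping each node to the set of its neighbours
    return sum(1 for u, v, w in combinations(adj, 3)
               if v in adj[u] and w in adj[u] and w in adj[v])
```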

Taha Sevim, Muhammet Selçuk Güvel, Lale Özkahya

LaserTank is NP-Complete

We show that the classical game LaserTank is $$\mathrm {NP}$$-complete, even when the tank movement is restricted to a single column and the only blocks appearing on the board are mirrors and solid blocks. We show this by reducing 3-SAT instances to LaserTank puzzles.

Per Alexandersson, Petter Restadh

Data Modeling and Machine Learning

Frontmatter

Improved Cross-Validation for Classifiers that Make Algorithmic Choices to Minimise Runtime Without Compromising Output Correctness

Our topic is the use of machine learning to improve software by making choices which do not compromise the correctness of the output, but do affect the time taken to produce such output. We are particularly concerned with computer algebra systems (CASs); in particular, our experiments are for selecting the variable ordering to use when performing a cylindrical algebraic decomposition of n-dimensional real space with respect to the signs of a set of polynomials. In our prior work we explored the different ML models that could be used, and how to identify suitable features of the input polynomials. In the present paper we both repeat our prior experiments on problems which have more variables (and thus exponentially more possible orderings), and examine the metric that our ML classifiers target. The natural metric is computational runtime, with classifiers trained to pick the ordering which minimises this. However, this leads to the situation where models do not distinguish between any of the non-optimal orderings, whose runtimes may still vary dramatically. In this paper we investigate a modification to the cross-validation algorithms of the classifiers so that they do distinguish these cases, leading to improved results.

Dorian Florescu, Matthew England

A Numerical Efficiency Analysis of a Common Ancestor Condition

The aim of this paper is to determine whether the sufficient condition, discovered by Steudel and Ay, for the existence of a common ancestor of some variables in a larger graph is worth checking. The usefulness of this criterion is tested with a numerical method.

Luca Carlini, Nihat Ay, Christiane Görgen

Optimal Transport to a Variety

We study the problem of minimizing the Wasserstein distance between a probability distribution and an algebraic variety. We consider the setting of finite state spaces and describe the solution depending on the choice of the ground metric and the given distribution. The Wasserstein distance between the distribution and the variety is the minimum of a linear functional over a union of transportation polytopes. We obtain a description in terms of the solutions of a finite number of systems of polynomial equations. The case analysis is based on the ground metric. A detailed analysis is given for the two bit independence model.
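In the special one-dimensional case, the Wasserstein-1 distance between two discrete distributions on common sorted support points has a closed form: the integral of the absolute difference of their cumulative distribution functions. A sketch of this special case (the paper treats general ground metrics):

```python
def w1_on_line(mu, nu, xs):
    # Wasserstein-1 distance between discrete distributions mu and nu
    # supported on the sorted points xs: sum over gaps of |F_mu - F_nu|
    total, cum = 0.0, 0.0
    for i in range(len(xs) - 1):
        cum += mu[i] - nu[i]                 # CDF difference up to xs[i]
        total += abs(cum) * (xs[i + 1] - xs[i])
    return total
```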

Türkü Özlüm Çelik, Asgar Jamneshan, Guido Montúfar, Bernd Sturmfels, Lorenzo Venturello

SFV-CNN: Deep Text Sentiment Classification with Scenario Feature Representation

In this paper, we present a deep learning approach to representing the scenario-related features of a sentence for text classification, and also demonstrate an interesting application that shows the nearest scenarios for a sentence. To improve the performance of text classification, it is necessary to make classifiers aware of the scenario switching in the background of the texts. We propose a CNN-based sentiment analysis model named SFV-CNN for sentence classification. The proposed model can be improved by assigning a suitable window for each scenario corpus in scenario word embedding training. Our experiments demonstrate that SFV-CNN brings an improvement in accuracy and shows clearer advantages when tested on datasets across scenarios.

Haoliang Zhang, Hongbo Xu, Jinqiao Shi, Tingwen Liu, Jing Ya

Reinforcement Learning Based Interactive Agent for Personalized Mathematical Skill Enhancement

Traditional intelligent systems recommend a teaching sequence to individual students without monitoring their ongoing learning attitude. This frustrates students learning a new skill and moves them away from their target learning goal. As a step toward the best teaching strategy, in this paper a Personalized Skill-Based Math Recommender (PSBMR) framework is proposed to automatically recommend pedagogical instructions based on a student's learning progress over time. The PSBMR utilizes an adversarial bandit, in contrast to the classic multi-armed bandit (MAB) problem, to estimate the student's ability and recommend tasks matching the student's skill level. Specifically, this paper proposes an online learning approach to model a student's concept-learning profile and uses the Exp3 algorithm for optimal task selection. To verify the framework, simulated students with different behavioral complexity have been modeled using the Q-matrix approach based on item response theory. The simulation study demonstrates the effectiveness of this framework in acting fairly with different groups of students so that they acquire the skills needed to learn basic mathematics.
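The Exp3 algorithm mentioned above maintains exponential weights over arms and mixes in uniform exploration; a minimal sketch (parameter names are ours, not the paper's):

```python
import math
import random

def exp3(reward_fns, gamma=0.1, rounds=500, seed=0):
    # Exp3 adversarial bandit: reward_fns[i]() returns a reward in [0, 1];
    # weights are updated with importance-weighted reward estimates
    rng = random.Random(seed)
    k = len(reward_fns)
    w = [1.0] * k
    for _ in range(rounds):
        s = sum(w)
        p = [(1 - gamma) * wi / s + gamma / k for wi in w]
        arm = rng.choices(range(k), weights=p)[0]
        x_hat = reward_fns[arm]() / p[arm]     # unbiased reward estimate
        w[arm] *= math.exp(gamma * x_hat / k)
    return w
```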

Muhammad Zubair Islam, Kashif Mehmood, Hyung Seok Kim

Common Vector Approach Based Image Gradients Computation for Edge Detection

In this study, the Common Vector Approach (CVA) is adopted for image gradient computation in order to reveal the edge maps of images. First, noise in the image is smoothed by Gaussian filtering; second, the gradient map is computed using CVA; then the angle and direction maps are obtained from the gradient map; lastly, peak points are selected and a smart routing procedure is performed to link them. The derivatives of the image in the vertical and horizontal directions are obtained by utilizing the CVA, which is the crucial step and the source of this work's novelty. To compare results objectively, we judge the performance with ROC curve analysis. As a contribution to the edge detection area, CVA-ED presents satisfactory results, and the edge maps produced can be used in the tasks of object tracking, motion estimation and image retrieval.

Sahin Isik, Kemal Ozkan

Optimizing Query Perturbations to Enhance Shape Retrieval

3D shape retrieval algorithms use shape descriptors to identify the shapes in a database that are most similar to a given key shape, called the query. Many shape descriptors are known, but none is perfect. Therefore, the common approach in building 3D shape retrieval tools is to combine several descriptors with some fusion rule. This article proposes an orthogonal approach. The query is improved with a genetic algorithm, which evolves a population of perturbed copies of the query, called clones. The best clone is the one closest to its closest shapes in the database, for a given shape descriptor. Experimental results show that improving the query also improves the precision and completeness of the shape retrieval output. This article shows evidence for several shape descriptors. Moreover, the method is simple and massively parallel.

Bilal Mokhtari, Kamal Eddine Melkemi, Dominique Michelucci, Sebti Foufou

Authorship Attribution by Functional Discriminant Analysis

Recognizing the author of a given text is a very difficult task that relies on several complicated and correlated criteria. For this purpose, several classification methods are used (neural networks, discriminant analysis, SVM...). But a good representation of the text, one that keeps as much of the stylistic information as possible, is very important and has a considerable influence on the result. In this paper, we tackle the problem of authorship attribution for very long texts using discriminant analysis extended to the functional case, after representing the texts as elements of a separable Hilbert space.

Chahrazed Kettaf, Abderrahmane Yousfate

Tools and Software Track

Frontmatter

An Overview of Geometry Plus Simulation Modules

We give an overview of the open-source library “G+Smo”. G+Smo is a C++ library that brings together mathematical tools for geometric design and numerical simulation. It implements the relatively new paradigm of isogeometric analysis, which suggests the use of a unified framework in the design and analysis pipeline. G+Smo is an object-oriented, cross-platform, fully templated library and follows the generic programming principle, with a focus on both efficiency and ease of use. The library aims at providing access to high quality, open-source software to the community of numerical simulation and beyond.

Angelos Mantzaflaris

DD-Finite Functions Implemented in Sage

We present here the Sage package dd_functions which provides symbolic features to work with DD-finite functions, a natural extension of the class of holonomic or D-finite functions, on the computer. Closure properties, composition of DD-finite functions and sequence extraction are key features of this package. All these operations reduce the problem to linear algebra computations where classical division-free algorithms are used.
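The coefficient sequences behind D-finite (and, by extension, DD-finite) functions satisfy linear recurrences with polynomial coefficients; unrolling such a recurrence is the essence of sequence extraction. A plain-Python sketch, not the package's API:

```python
def p_recursive_terms(coeffs, init, n):
    # first n terms of a sequence satisfying the P-recursive relation
    #   coeffs[r](k)*a[k+r] + ... + coeffs[0](k)*a[k] = 0
    # given the r initial terms in init
    a = list(init)
    r = len(coeffs) - 1
    for k in range(n - r):
        s = sum(coeffs[i](k) * a[k + i] for i in range(r))
        a.append(-s / coeffs[r](k))
    return a[:n]
```

For example, the factorial satisfies a[k+1] - (k+1)*a[k] = 0 with a[0] = 1.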

Antonio Jiménez-Pastor

Backmatter
