Fake scientific reviews

Playing Regex Golf with Genetic Programming

ACM Genetic and Evolutionary Computation Conference (GECCO) 2014, Vancouver

Real review

An interesting paper. The results obtained by means of GP seem to be superior to other methods, or at least on par with human players.

In some cases the performance gaps, i.e., the performance differences between the proposed solutions and the best ones, are very large. Could this be solved with some improvement of the GP-based approach?

The expressions acquired by other methods or by humans should be presented in some cases. This would clarify why the proposed method is successful in some cases and not in others.

Fake review (ACCEPT)

Regex golf has recently emerged as a specific kind of code golf, i.e., unstructured and informal programming competitions aimed at writing the shortest code solving a particular problem. A problem in regex golf usually consists in writing the shortest regular expression which matches all the strings in a given list and does not match any of the strings in another given list. In this paper, a methodology based on Genetic Programming for generating solutions to regex golf problems is proposed—a regex golf player. A population of candidate regular expressions represented as trees and constructed with carefully selected regular expression operators and constants is generated.

This paper is solid work that should definitely be accepted because it shows an ability to generate solutions for the regex golf problem. The authors have done a good job in developing the scheme and the general idea of the paper. The authors have also shown a clear connection between the concept of Genetic Programming and the solution to the regex golf problem. I think this is a good result. The paper is written well and the authors have done a good job in developing their scheme. However, some issues remain. The paper is not clear about the significance of the tree structure. Does it reflect the fact that the tree structure is only used to encode the solutions to the problem? Is it the case that this structure is only used to encode the solutions to the problem? If so, then it is unclear what happens to the solution when a new tree is created? What happens to the solution when the current tree is deleted?

Fake review (REJECT)

The experimental section is unclear because the paper does not present any data in the experiment. The proposed approach is difficult to evaluate, as the paper presents results only when an operator is used and does not describe the number of operators or the number of operators. The experiment is also unclear because the experiment was performed on a different machine than the one used to evaluate the results presented in the paper. The results are not convincing, as performance on the machine was limited by the number of operators used, not by the number of regular expressions used.

The paper is not clear as to whether it is possible to use a subset of operators in the same regular expression, thus, there are no guarantees that performance will be comparable to that of a single operator. The experiment is also not convincing because it is unclear whether the proposed approach will work on other kinds of regular expressions. The results are not convincing, because performance on the machine was limited by the number of operators used, not by the number of regular expressions used. The paper is not clear as to whether it is possible to use a subset of operators in the same regular expression, thus, there are no guarantees that performance will be comparable to that of a single operator.

Syntactical Similarity Learning by means of Grammatical Evolution

14th International Conference on Parallel Problem Solving from Nature (PPSN) 2016, Edinburgh

Real review

The paper proposes a Grammatical Evolution (GE) based approach to the task of syntactic similarity learning, in unstructured text streams. It does this through example training, such that a similarity measuring function is evolved, which, given two strings, states their degree of similarity. The paper is very well written, and presents interesting modifications to standard GE, such as the use of alternative transformations for mapping, to avoid non-mapping individuals.

Not as much information as desirable is provided regarding the underlying system, which makes replication of the results very hard (admittedly, this would always be the case, given the ad-hoc language and VM interpreter). Still, some information is missing: how the crossover operator works (in standard GE, any point in each individual can be chosen; I am assuming that here the same point must be picked in both individuals); which initialisation method was used (very important for GE performance); the number of generations (only population sizes were reported); etc. Finally, 5 runs for each experiment is a very small number (especially if random initialisation was used for GE), and makes the results and their analysis lose some significance. A table of the (evolutionary) experimental setup would be ideal, if space allows.

Fake review (ACCEPT)

Several research efforts have shown that a similarity function synthesized from examples may capture an application-specific similarity criterion in a way that fits the application needs more effectively than a generic distance definition. The algorithm takes in input pairs of strings along with an indication of whether they adhere or not adhere to the same syntactic pattern. The proposal is assessed on patterns representative of practical applications. The paper is very good and must be accepted because it is well written and well structured.

The introduction is interesting but not difficult to understand. The authors have a clear idea of what they are doing, but the experiments are a little bit disappointing. I think the experiments should be included because they suggest that the proposed algorithm needs a lot of work to be really useful. I have to be a little bit skeptical about the relevance of the proposed algorithms, but I do think that the proof of concept is useful and easy to understand. I am not sure why the authors chose to incorporate the experimental results. Can you explain why this was done, and how much effort was involved? The paper does not provide any comparison with the proposed algorithms. The experiment is not very clear and the results are not very convincing. How much effort did you have to make it work? Is it true that the proposed algorithms need a lot of effort?

Fake review (REJECT)

The experimental section is not convincing and the paper should be rejected because the analysis is not sufficiently convincing. The paper should also be tested on a different set of data to show whether it is the optimal solution. The authors do not provide an evaluation of their approach on a specific use case. I am not sure whether the results are really representative of real world applications. The experimental section is not sufficient. I would like to see a better explanation of the difference between the two results.

Active Learning of Regular Expressions for Entity Extraction

IEEE Transactions on Cybernetics 2017

Real review

This paper investigates a very interesting topic on active learning of regular expression for entity extraction using genetic programming. The paper is generally well-written, the English is good and clear, and the organisation is good too. A number of experiments have been conducted and a reasonably comprehensive analysis has been given based on the results.


There are also some limitations and issues, which can be seen as follows:

What is the representation of GP in this work? Which types of GP algorithms are used? Representation is key to understanding the system/approach. A figure would help.

Section IV is really long, and it is very hard to get the overall picture of the proposed system. An example or a flowchart showing the connections and steps of the proposed system would be very helpful.

Please clearly state the motivation of this work. It would be better to give it a subsection.

A particularly important point is why GP is used here. The stated reason, “due to recent results which have advanced significantly over all earlier approaches, ….”, can be a reason, but I believe there should be deeper underlying reasons why GP is good for this task.

Please give a subsection on the major contributions (and novelty).

What are the parameter settings of the algorithms in the experiments? E.g., the values for population size, crossover rate, mutation rate, maximum number of generations, and all other parameters.

How many runs of GP have been executed? It has been discussed in the community that at least 30 independent runs of GP should be executed before making any claims based on the results.

The format of the references is not consistent; please fix it.

Fake review (ACCEPT)

I recommend acceptance of this solid work as it represents a significant contribution to the literature and I think the system is well-suited to the task. If the system were to be used to extract the most relevant entity, it would also be interesting to see whether the system could generalize to other datasets and whether its performance is comparable to the existing approaches. The proposed system is also very novel as it is not in any previous work. This work demonstrates a potential of the system and is an example of the relevance of using a Genetic-based system for extracting the most relevant entity. This is a significant contribution to the literature and I think the system is well-suited to the task. The proposed system is also interesting as it is not in any previous work.

I am also not familiar with the work of the authors and it would be interesting to know whether the system is of similar capabilities to other approaches or whether it differs from those used in the literature. I am interested about the system's performance in terms of performance in the different datasets. I am not sure how well the system would scale to larger datasets but I am also not sure whether the system could scale to more than 1,000,000 datasets. Is it possible to scale the system to such a large dataset? Is the system able to perform a lot of work in the same time? I am not sure if the system is able to scale to as much as the authors claim.

Fake review (REJECT)

During the learning process, the system digs into the input text for selecting the most appropriate extraction query to be submitted to the user in order to improve the current extractor. All the components of the system are carefully tailored to the peculiarities of active learning with Genetic Programming and of entity extraction from unstructured text. The proposal is evaluated in depth, on a number of challenging datasets and based on a realistic estimate of the user effort involved in answering each single query.

I cannot recommend acceptance of this paper because of the very limited results and high degree of technical complexity in the proposal, and the lack of the required tools for evaluation.

However, I would like to mention that the proposed design is interesting and can be used in a variety of situations. The paper presents a design for a novel approach to extracting entities from unstructured text. The proposed design is also interesting: it provides a novel approach for extracting entities from unstructured text in the context of a natural language processing task. The proposed approach is also novel: it is well-suited to the natural language processing task and is easy to implement, although this is not the case in the implementation of the system.