Original software publication | Volume 11, 100220, February 2022

ProcessPerformance: A portable and easy-to-use tool to measure resource consumption of running processes

Open Access | Published: January 12, 2022 | DOI: https://doi.org/10.1016/j.simpa.2022.100220

      Highlights

      • ProcessPerformance is a tool to measure resource consumption of running processes.
      • It provides information about the utilization of CPU, memory and network resources.
      • ProcessPerformance is an easy-to-use command-line tool.
      • It is implemented for the .Net Core platform, which runs on most operating systems.
      • It has been used in many projects to measure the efficiency of complex systems.

      Abstract

      The measurement of the resources consumed by an application at runtime is an important task in different scenarios such as program optimization, malware and bug detection, and hardware scaling. Although different tools exist for this purpose, they often show limitations such as operating system and hardware dependencies, performance overhead, and usage complexity. For this reason, we created ProcessPerformance, a portable and easy-to-use command-line tool that provides information about the CPU, memory, and network resources consumed by any combination of running processes. It also avoids the performance overhead caused by software and binary code injection.

      Keywords

      Code metadata (Table 1)
      • Current code version: 1.1.1
      • Permanent link to code/repository used for this code version: https://github.com/SoftwareImpacts/SIMPAC-2021-199
      • Permanent link to reproducible capsule: https://codeocean.com/capsule/7889504/tree/v1
      • Legal code license: MIT
      • Code versioning system used: git
      • Software code languages, tools, and services used: C# 8.0
      • Compilation requirements, operating environments & dependencies: .Net Core 3.1+, TraceEvent 2.0.55
      • Link to developer documentation/manual: https://github.com/ComputationalReflection/ProcessPerformance/blob/master/README.md
      • Support email for questions: [email protected]

      1. Introduction

      It is sometimes necessary to measure the resources a process is consuming at runtime [Nethercote & Seward]. That measurement is a valuable piece of information to optimize applications consuming too many resources, scale the hardware resources depending on the running processes, identify potentially malicious programs, and compare the resource consumption of different processes [Nethercote & Seward]. Common resources the user needs to measure are CPU, memory, and network consumption [Salapura et al.]. Resource consumption data are usually collected by hardware monitors, additional routines implemented at the operating system level, or code injected into a program [Larus & Ball]. The choice of a measurement technique depends on different factors, such as the data to be measured, the potential impact of the measurement tools on the performance of the whole application, and the available hardware and software resources, among others [Larus & Ball].
      Modern operating systems include system monitors to supervise the usage of system resources in a computer [Hoare]. Task managers are system monitor programs that provide information about computer performance, including the names of running processes, CPU and GPU load, I/O information, logged-in users, and operating system services [Pothuganti et al.]. Example task managers are SysInternals Process Explorer [Microsoft], GNOME System Monitor [The GNOME Project], and macOS Activity Monitor [Apple Inc.]. These task managers provide a graphical user interface to give information to the user, but they do not facilitate the extraction of the measured data. On the contrary, tasklist [Microsoft], top, iotop [Chazarain], and nethogs [Engelen] are textual command-line task managers that make it easier to retrieve that information. However, iotop and nethogs only provide I/O and network information, respectively, and none of those tools supports Windows, Linux, and macOS at the same time.
      Software instrumentation tools add pieces of code around source or binary code to measure dynamic resource consumption [Pierce et al.]. They can be used to diagnose memory errors, evaluate runtime performance, generate trace information, and profile applications [Nethercote & Seward]. Valgrind is a binary instrumentation framework for different Unix-based executable files [Nethercote & Seward]. Apache NetBeans Profiler is a similar tool for Java programs [Kostaras et al.]. The main drawback of software instrumentation tools is the memory and CPU consumption overhead they introduce with the instrumented code.
      Hardware Performance Counters (HPCs) are another approach to obtain detailed information about application execution [Uhsadel et al.]. HPCs are hardware counters that register microprocessor activities during the trace-generation phase. Later, that information can be analyzed by tools such as Windows Reliability and Performance Monitor (perfmon) [Microsoft], SysInternals Process Monitor [Microsoft], and perf. The main benefit of HPCs compared to software-based approaches is that they cause lower performance overhead to obtain detailed performance information, but they are hardware-dependent [Malone et al.].
      Some other approaches are based on modifying the behavior of the virtual machine used to run the software. The Java Management Extensions (JMX) framework provides a configurable mechanism for managing and monitoring Java applications [Kreger]. With Managed Beans (MBeans), programs can be instrumented to measure the runtime resources consumed by the application. Likewise, dotnet-trace is a .Net application that enables the collection of application traces without a native profiler [Microsoft]. These kinds of tools provide information about runtime resource consumption, but only for applications executed on particular virtual machines.
      Process and System Utilities (psutil) is a cross-platform library for retrieving information about running processes, such as CPU, memory, disk, and network consumption [psutil]. It is written in Python 3.4 and aims to monitor and profile systems. It supports most operating systems. However, psutil is delivered as an API, so users must write their own programs to visualize the resource consumption of running processes.
      In this paper, we present ProcessPerformance, an open-source multi-platform tool to monitor and retrieve the CPU, memory, and network resources consumed by any combination of running processes. Runtime process instrumentation is performed at the operating-system level, so no overhead is produced by high-level source code instrumentation. HPC information may be used by the operating system software when the hardware supports HPCs, causing almost no runtime overhead [Malone et al.]. ProcessPerformance is implemented for the free open-source .Net Core platform, which runs on Windows, Linux, and macOS. It provides an easy-to-use command-line interface that does not require the subsequent analysis of tracing log information. The services of ProcessPerformance can be used by any other application, since its source code is available for download at https://github.com/ComputationalReflection/ProcessPerformance.

      2. Application description and functions

      ProcessPerformance is designed to easily provide information about the resources consumed by any process. It retrieves information about the CPU, memory, and network resource consumption during a period of time. ProcessPerformance is a portable open-source application implemented for the .Net Core platform. Memory and CPU usage is measured with the Process class in the System.Diagnostics namespace, which supports interaction with local and remote processes, event logs, and performance counters [Microsoft]. For process network consumption, we use the TraceEvent library [Microsoft], which allows us to collect and process event data of the processes running on the operating system. To get the information of the overall network traffic, ProcessPerformance uses the NetworkInterface class, which provides configuration and statistical information for a network interface.
      What follows is a brief description of how ProcessPerformance gathers resource consumption information from the operating system, as illustrated in Fig. 1:
      Fig. 1. Gathering resource consumption information from the operating system. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
      • CPU consumption, measured as the percentage of use of the total CPU resources. It is computed with the following equation:

        CPU consumption = (total processor time / clock time) / number of system cores   (1)

        The first operand is the TotalProcessorTime property of the Process class, which returns the time that the microprocessor spends working on a task (it is the sum of user and privileged processor time) [Microsoft]. Processor times are represented as blue rectangles in Fig. 1. Clock time is the elapsed execution time for the measured interval (i.e., end time - start time in Fig. 1). The division of these two values gives us the percentage of CPU used by the process. That value is then divided by the number of system cores (the ProcessorCount property of the Environment class) to obtain the CPU consumption, because TotalProcessorTime represents the sum of working times for all the cores.
      • Overall network traffic refers to the number of bytes transferred across the network by all the processes since ProcessPerformance started executing. The GetIPv4Statistics method is used to get that information from the operating system, including the bytes sent (the BytesSent property of IPv4InterfaceStatistics, shown as dark green rectangles in Fig. 1) and received (the BytesReceived property, shown as light green rectangles in Fig. 1). With this information, ProcessPerformance displays both the number of bytes transferred and the transmission rate.
      • Process network consumption is the number of bytes transferred by one single process across the network since ProcessPerformance started measuring. A TraceEventSession object is used to register all the TCP/IP events triggered in the system. It follows the Observer design pattern, where listeners are registered to be notified when different events occur [Gamma et al.]. ProcessPerformance registers itself for the TcpIpRecv and TcpIpSend events to store the information about data received and sent. Each time one of these two events is triggered, we check whether the process causing the data transfer is the one to be monitored and, if so, we update the variables counting the number of bytes transmitted by that process. Fig. 1 shows the data sent by a process with dark gray rectangles, while light gray boxes represent the data received.
      • Memory consumption. To measure the memory consumed by a process at runtime, we take the maximum size of working set memory used by the process (i.e., the PeakWorkingSet64 property of Process [Microsoft]). The working set of a process is the set of memory pages currently visible to the process in physical RAM. Those pages are resident and available for the application to use without triggering a page fault. The working set includes both shared and private data. The shared data comprises the pages that contain all the instructions the process executes, including those from the process modules and the system libraries. As shown in Fig. 1, it is common that the working set memory used by a process grows at runtime as the program demands more memory from the operating system.
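      The per-interval arithmetic described above can be sketched in language-neutral form. The following Python fragment mirrors the three computations; all sampled values are hypothetical stand-ins for the readings that ProcessPerformance obtains from the .Net Process, Environment, and NetworkInterface APIs:

```python
# Sketch of the per-interval metric arithmetic described above.
# The sampled values are hypothetical; in ProcessPerformance they come
# from TotalProcessorTime, ProcessorCount, IPv4InterfaceStatistics,
# and PeakWorkingSet64.

def cpu_consumption(total_processor_time: float,
                    clock_time: float,
                    n_cores: int) -> float:
    """Eq. (1): fraction of the total CPU resources used in the interval."""
    return total_processor_time / clock_time / n_cores

def transfer_rate(bytes_before: int, bytes_after: int,
                  interval_seconds: float) -> float:
    """Overall network traffic rate from two byte-counter snapshots."""
    return (bytes_after - bytes_before) / interval_seconds

def update_peak_working_set(current_peak: int, sample: int) -> int:
    """Running maximum of working-set samples (PeakWorkingSet behavior)."""
    return max(current_peak, sample)

# Example: a process used 2 s of processor time over a 1 s wall-clock
# interval on a 4-core machine, i.e. 50% CPU consumption.
print(cpu_consumption(2.0, 1.0, 4))        # 0.5
print(transfer_rate(1_000, 513_000, 0.5))  # 1024000.0 bytes/s
print(update_peak_working_set(40_000_000, 38_000_000))  # 40000000
```

      Note that dividing by the core count keeps the CPU value in the [0, 1] range even when the process keeps several cores busy during the same wall-clock interval.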
      Fig. 2. Output for ProcessPerformance chrome -interval:500.

      2.1 Usage and examples

      ProcessPerformance is an easy-to-use command-line tool. In Windows, the application is run directly as ProcessPerformance; in Unix-based operating systems, it must be invoked as dotnet ProcessPerformance.dll. If the user runs ProcessPerformance -help, the following command-line options are described:
      • process1 process2 ... processn. A space-separated list of the names or PIDs (process identifiers) of the processes to be monitored. If no process is passed, the overall system resources are displayed.
      • -interval:milliseconds. The interval used to gather the runtime information of resource consumption, expressed in milliseconds. The default value is 1,000 (one second).
      • -network:IP_address. IP address of the network interface used to measure data transmission.
      • -csv. Shows the output in comma-separated values (CSV) format.
      • -help. Describes the different command-line arguments.
      For example, the following command shows the CPU, memory, and network resources consumed by the Chrome web browser, every half second: ProcessPerformance chrome -interval:500. That command produces the output displayed in Fig. 2, where ProcessPerformance tells us that there are four different processes running Chrome, and displays the resources consumed by the four processes.
      ProcessPerformance allows measuring the resources consumed by a complex application or system, defined as a collection of running programs. For example, the following command can be used to measure the overall resources used by a client–server application that runs an application server (Apache Tomcat), two persistence systems (PostgreSQL and Neo4j), and the Chrome web browser as a client:
      ProcessPerformance chrome tomcat neo4j postgres -network:192.168.137.2
      The output is shown in Fig. 3. The network traffic is displayed with two values: the sum of all the data transferred by the 22 processes (4 programs), and the data transmitted through the 192.168.137.2 network interface.
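      The -csv option makes this output easy to post-process with standard tools. As a minimal sketch, the following Python fragment aggregates such an output with the standard csv module; the column names and layout shown here are hypothetical (the actual format is described in the tool's README):

```python
import csv
import io

# Hypothetical sample of ProcessPerformance -csv output. The real
# column names/order may differ; see the tool's README.
sample_output = """\
timestamp,process,cpu_percent,memory_bytes,net_bytes
0.5,chrome,12.5,104857600,2048
1.0,chrome,10.0,115343360,4096
"""

reader = csv.DictReader(io.StringIO(sample_output))
rows = list(reader)

# Average CPU consumption over the sampled intervals.
avg_cpu = sum(float(r["cpu_percent"]) for r in rows) / len(rows)
print(f"samples: {len(rows)}, average CPU: {avg_cpu}%")  # samples: 2, average CPU: 11.25%
```

      Emitting CSV rather than a rendered table is what lets the measurements feed directly into spreadsheets or scripts without scraping the interactive output.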
      Fig. 3. Output for ProcessPerformance chrome tomcat neo4j postgres -interval:500 -network:192.168.137.2.

      3. Impact

      When measuring the resources consumed by an application at runtime, it is important to define a statistically rigorous methodology and the correct use of tools [Georges et al.]. The characteristics of ProcessPerformance have made it a valuable tool to measure the runtime resources consumed by many kinds of applications. In the scenario of comparing techniques for the implementation of programming languages, ProcessPerformance has been used to measure runtime execution and memory consumption of program specialization [Ortin et al.], static single assignment (SSA) transformations [Quiroga & Ortin], hybrid dynamic and static typing [Ortin et al.], compiler implementation [Garcia et al.], runtime type cache optimization [Quiroga et al.], intersection and union types [Ortin & García], and type inference [Ortin]. It has also been used to compare the efficiency of different Python implementations [Redondo & Ortin], the invokedynamic opcode included in Java 7 [Conde & Ortin], the implementation of dynamic languages for the Java platform [Ortin et al.], and the adaptability of Java applications [Lagartos et al.].
      In the scenario of aspect-oriented programming (AOP), ProcessPerformance has been utilized to compare the efficiency of dynamic and static weavers for the Java [Rodriguez-Prieto et al.] and .Net platforms [Felix & Ortin], to analyze the suitability of AOP for distributed systems security [Garcia et al.], and to measure the runtime performance of the DSAW AOP platform [Ortin et al.].
      Likewise, our tool has measured memory and CPU consumption of various virtual machine implementations, such as the addition of structural intercession [Ortin et al.] and dynamic inheritance [Redondo & Ortin] to .Net, and the implementation of the nitrO virtual machine [Ortin & Diez]. ProcessPerformance has been used to measure network and memory consumption in the design and implementation of mobile applications for multiple platforms [Miravet et al.], including the DIMAG back-end module [Miravet et al.] and the LIZARD native interface generator [Marin et al.].
      We have used ProcessPerformance to measure the network traffic generated by an infrastructure to deliver synchronous programming laboratories online [Garcia et al.]. In this case, various applications were measured together, since the system comprises different processes. ProcessPerformance was a helpful tool to know the network traffic generated by the entire system, which was a critical aspect due to the limited Internet connections the students have in their households [Garcia et al.].
      ProcessPerformance has also been utilized to measure training and inference times of machine learning models in different scenarios such as programmer classification [Ortin et al.], student performance prediction [Riestra-González et al.], decompilation [Escalada et al.; Ortin & Escalada], and analysis of binary files [Escalada et al.]. We have used our tool to compare the runtime resources consumed by different persistence systems such as graph databases [Rodriguez-Prieto et al.], reflective persistence systems [Ortin et al.], aspect-oriented database evolution systems [Pereira et al.], and orthogonal object-based persistence [García Perez-Schofield et al.].

      4. Limitations

      As mentioned in Section 2, ProcessPerformance uses the TraceEvent library to obtain the data sent and received by a particular process. TraceEvent was originally designed to parse the Event Tracing for Windows (ETW) events generated by the Windows operating system. Thus, ProcessPerformance only provides this per-process network information when run on Windows. However, it is worth noting that the overall network traffic information is provided on any operating system.

      Declaration of Competing Interest

      The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

      Acknowledgments

      This work has been partially funded by the Spanish Department of Science, Innovation and Universities: project RTI2018-099235-B-I00. The authors have also received funds from the University of Oviedo, Spain, through its support to official research groups (GR-2011-0040).

      References

        • Nethercote N.
        • Seward J.
        Valgrind: A program supervision framework.
        Electron. Notes Theor. Comput. Sci. 2003; 89: 44-66
      1. V. Salapura, K. Ganesan, A. Gara, M. Gschwind, J.C. Sexton, R.E. Walkup, Next-Generation Performance Counters: Towards Monitoring Over Thousand Concurrent Events, in: ISPASS 2008 - IEEE International Symposium On Performance Analysis Of Systems And Software, 2008, pp. 139–146.

        • Larus J.R.
        • Ball T.
        Rewriting executable files to measure program behavior.
        Softw. Pract. Exp. 1994; 24: 197-218
        • Hoare C.A.R.
        Monitors: An operating system structuring concept.
        Commun. ACM. 1974; 17: 549-557
        • Pothuganti K.
        • Haile A.
        • Pothuganti S.
        A comparative study of real time operating systems for embedded systems.
        Int. J. Innov. Res. Comput. Commun. Eng. 2016; 6: 12008-12014
        • Microsoft K.
        Process explorer v16.43.
        2021 (https://docs.microsoft.com/en-us/sysinternals/downloads/process-explorer)
        • The GNOME Project K.
        System monitor.
        2021 (https://help.gnome.org/users/gnome-system-monitor/stable)
        • Apple Inc. K.
        Activity monitor user guide for mac OS Monterey.
        2021 (https://support.apple.com/guide/activity-monitor/welcome/mac)
        • Microsoft K.
        Tasklist.
        2021 (https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/tasklist)
      2. tOp. linux manual page.
        2021 (https://man7.org/linux/man-pages/man1/top.1.html)
        • Chazarain G.
        iotop. linux manual page.
        2021 (https://www.man7.org/linux/man-pages/man8/iotop.8.html)
        • Engelen A.
        Nethogs.
        2021 (https://github.com/raboof/nethogs)
        • Pierce J.
        • Smith M.D.
        • Mudge T.
        Instrumentation tools.
        in: Fast Simulation Of Computer Architectures. Springer, US, Boston, MA1995: 47-86
        • Kostaras I.
        • Drabo C.
        • Juneau J.
        • Reimers S.
        • Schröder M.
        • Wielenga G.
        Debugging and profiling.
        in: Pro Apache NetBeans. Springer, 2020: 127-178
      3. L. Uhsadel, A. Georges, I. Verbauwhede, Exploiting Hardware Performance Counters, in: 5th Workshop On Fault Diagnosis And Tolerance In Cryptography, 2008, pp. 59–67.

        • Microsoft L.
        Perfmon.
        2021 (https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/perfmon)
        • Microsoft L.
        Process monitor v3.86.
        2021 (https://docs.microsoft.com/en-us/sysinternals/downloads/procmon)
      4. Perf: Linux profiling with performance counters.
        2021 (https://perf.wiki.kernel.org/index.php)
        • Malone C.
        • Zahran M.
        • Karri R.
        Are hardware performance counters a cost effective way for integrity checking of programs.
        in: Proceedings Of The Sixth ACM Workshop On Scalable Trusted Computing. STC’11. Association for Computing Machinery, New York, NY, USA2011: 71-76
        • Kreger H.
        Java management extensions for application management.
        IBM Syst. J. 2001; 40: 104-129
        • Microsoft H.
        Dotnet-trace performance analysis utility.
        2021 (https://docs.microsoft.com/en-us/dotnet/core/diagnostics/dotnet-trace)
      5. Psutil. cross-platform lib for process and system monitoring in python.
        2021 (https://github.com/giampaolo/psutil)
        • Microsoft H.
        Process.TotalProcessorTime property (system.diagnostics).
        2021 (https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.process.totalprocessortime?view=netcore-3.1)
        • Microsoft H.
        The microsoft.diagnostics.tracing.TraceEvent library.
        2021 (https://github.com/Microsoft/perfview/blob/main/documentation/TraceEvent/TraceEventLibrary.md)
        • Gamma E.
        • Helm R.
        • Johnson R.
        • Vlissides J.
        • Patterns D.
        Elements of Reusable Object-Oriented Software.
        Addison-Wesley Reading, Massachusetts1995
        • Microsoft E.
        Process.PeakWorkingSet64 property (system.diagnostics).
        2021 (https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.process.peakworkingset64?view=netcore-3.1)
        • Georges A.
        • Buytaert D.
        • Eeckhout L.
        Statistically rigorous java performance evaluation.
        in: Proceedings Of The 22nd Annual ACM SIGPLAN Conference On Object-Oriented Programming Systems And Applications. OOPSLA. ACM, New York, NY, USA2007: 57-76
        • Ortin F.
        • Garcia M.
        • McSweeney S.
        Rule-based program specialization to optimize gradually typed code.
        Knowl.-Based Syst. 2019; 179: 145-173
        • Quiroga J.
        • Ortin F.
        SSA transformations to facilitate type inference in dynamically typed code.
        Comput. J. 2017; 60: 1300-1315
        • Ortin F.
        • Zapico D.
        • Pérez-Schofield J.B.G.
        • Garcia M.
        Including both static and dynamic typing in the same programming language.
        IET Softw. 2010; 4: 268-282
        • Garcia M.
        • Ortin F.
        • Quiroga J.
        Design and implementation of an efficient hybrid dynamic and static typing language.
        Softw. Pract. Exp. 2016; 46: 199-226
        • Quiroga J.
        • Ortin F.
        • Llewellyn-Jones D.
        • Garcia M.
        Optimizing runtime performance of hybrid dynamically and statically typed languages for the .NET platform.
        J. Syst. Softw. 2016; 113: 114-129
        • Ortin F.
        • García M.
        Union and intersection types to support both dynamic and static typing.
        Inform. Process. Lett. 2011; 111: 278-286
        • Ortin F.
        Type inference to optimize a hybrid statically and dynamically typed language.
        Comput. J. 2011; 54: 1901-1924
        • Redondo J.M.
        • Ortin F.
        A comprehensive evaluation of common python implementations.
        IEEE Softw. 2014; 32: 76-84
        • Conde P.
        • Ortin F.
        JINDY: a java library to support invokedynamic.
        Comput. Sci. Inf. Syst. 2014; 11: 47-68
        • Ortin F.
        • Conde P.
        • Fernandez-Lanvin D.
        • Izquierdo R.
        The runtime performance of invokedynamic: An evaluation with a java library.
        IEEE Softw. 2013; 31: 82-90
        • Lagartos I.
        • Redondo J.M.
        • Ortin F.
        Efficient runtime metaprogramming services for Java.
        J. Syst. Softw. 2019; 153: 220-237
        • Rodriguez-Prieto O.
        • Ortin F.
        • O’Shea D.
        Efficient runtime aspect weaving for Java applications.
        Inf. Softw. Technol. 2018; 100: 73-86
        • Felix J.M.
        • Ortin F.
        Efficient aspect weaver for the .NET platform.
        IEEE Lat. Am. Trans. 2015; 13: 1534-1541
        • Garcia M.
        • Llewellyn-Jones D.
        • Ortin F.
        • Merabti M.
        Applying dynamic separation of aspects to distributed systems security: a case study.
        IET Softw. 2012; 6: 231-248
        • Ortin F.
        • Vinuesa L.
        • Felix J.M.
        The DSAW aspect-oriented software development platform.
        Int. J. Softw. Eng. Knowl. Eng. 2011; 21: 891-929
        • Ortin F.
        • Labrador M.A.
        • Redondo J.M.
        A hybrid class- and prototype-based object model to support language-neutral structural intercession.
        Inf. Softw. Technol. 2014; 44: 199-219
        • Redondo J.M.
        • Ortin F.
        Efficient support of dynamic inheritance for class- and prototype-based languages.
        J. Syst. Softw. 2013; 86: 278-301
        • Ortin F.
        • Diez D.
        Designing an adaptable heterogeneous abstract machine by means of reflection.
        Inf. Softw. Technol. 2005; 47
        • Miravet P.
        • Marín I.
        • Ortin F.
        • Rodríguez J.
        Framework for the declarative implementation of native mobile applications.
        IET Softw. 2014; 8: 19-32
        • Miravet P.
        • Marín I.
        • Ortin F.
        • Rionda A.
        DIMAG: A framework for automatic generation of mobile applications for multiple platforms.
        in: Proceedings of the 6th International Conference on Mobile Technology, Application and Systems. Mobility’09. 2009: 1-8
        • Marin I.
        • Ortin F.
        • Pedrosa G.
        • Rodriguez J.
        Generating native user interfaces for multiple devices by means of model transformation.
        Front. Inf. Technol. Electr. Eng. 2015; 16
        • Garcia M.
        • Quiroga J.
        • Ortin F.
        An infrastructure to deliver synchronous remote programming labs.
        IEEE Trans. Learn. Technol. 2021; 14: 161-172
        • Ortin F.
        • Rodriguez-Prieto O.
        • Pascual N.
        • Garcia M.
        Heterogeneous tree structure classification to label Java programmers according to their expertise level.
        Future Gener. Comput. Syst. 2020; 105: 380-394
        • Riestra-González M.
        • del Puerto Paule-Ruíz M.
        • Ortin F.
        Massive LMS log data analysis for the early prediction of course-agnostic student performance.
        Comput. Educ. 2021; 163: 104108-104128
        • Escalada J.
        • Scully T.
        • Ortin F.
        Improving type information inferred by decompilers with supervised machine learning.
        2021 (arXiv:2101.08116)
        • Ortin F.
        • Escalada J.
        Cnerator: A Python application for the controlled stochastic generation of standard C source code.
        SoftwareX. 2021; 15: 100711-100717
        • Escalada J.
        • Ortin F.
        • Scully T.
        An efficient platform for the automatic extraction of patterns in native code.
        Sci. Program. 2017; 2017
        • Rodriguez-Prieto O.
        • Mycroft A.
        • Ortin F.
        An efficient and scalable platform for Java source code analysis using overlaid graph representations.
        IEEE Access. 2020; 8: 72239-72260
        • Ortin F.
        • Lopez B.
        • García Perez-Schofield J.B.
        Separating adaptable persistence attributes through computational reflection.
        IEEE Softw. 2004; 21
        • Pereira R.H.
        • Perez-Schofield J.B.G.
        • Ortin F.
        Modularizing application and database evolution – an aspect-oriented framework for orthogonal persistence.
        Softw. Pract. Exp. 2017; 47: 193-221
        • García Perez-Schofield J.
        • García Roselló E.
        • Ortin F.
        • Pérez Cota M.
        Visual Zero: A persistent and interactive object-oriented programming environment.
        J. Vis. Lang. Comput. 2008; 19