Category: Uncategorized

  • What is the Big O notation? A little bit of theory.


    I will give you a small portion of theory today. Today's article will get you familiar with the concept of Big O notation. Let's see the very basics.

    This is a very important concept in computer science. Big O notation describes how an algorithm's runtime scales with the number of elements; in other words, it expresses the time complexity of an algorithm (method/function).

    It's a term that comes up frequently in job interviews, so if you want to crack them you should be familiar with it.

    When writing a function, or a whole feature built from blocks like algorithms implemented as methods, we strive to make it as optimal as we can. At least in an ideal world; in practice we often settle for "good enough". We can estimate the time complexity just by analyzing our code: its flow and the number of elements that pass through it. Not literally counting, of course.

    Big O notation is written like O(n), where n is the size of the problem. What does that mean exactly? Check this out. Let's say we have a loop:

            int[] arr = new int[690];
            for (int i = 0; i < arr.length; i++) {
                arr[i] += i+2;
                System.out.println("element: " + i);
            }

    What's the Big O for that piece of code? The runtime increases with the number of elements: the loop iterates through the array once, so the relationship is linear. The Big O is O(n), where n is the number of elements in the array. The letter itself is just a name for the input size – n is simply the convention.

            int[] arr = new int[690];
            int[] arr_two = new int[100];
            for (int i = 0; i < arr.length; i++) {
                arr[i] += i+2;
                System.out.println("element: " + i);
            }
            for (int j = 0; j < arr_two.length; j++) {
                System.out.println("element: " + j);
            }

    I have added another for loop to my chunk of code. What is the Big O now? We have two sequential for loops that iterate over two different arrays with different sizes. The Big O is O(n+m) – for consecutive (not nested) loops we add the sizes of the element sets, here the lengths of the two arrays.

        for (int i = 0; i < arr.length; i++) {
                System.out.println("element: " + i);
                for (int j = 0; j < arr_two.length; j++) {
                    System.out.println("element: " + j);
                    if (i == j) {
                        System.out.println("Nuber from first array " + arr[i] + " and from the second one " + arr_two[j] + " are equal!");
                    }
                }
            }

    What about this one? We have nested for loops, each iterating over a different array with a different size. The Big O is O(n·m) – we multiply the sizes of the two sets of elements. A common mistake here is to write O(n^2); that is only correct when both collections have the same size. The letters in the O function are just variables standing for input sizes, and since the two arrays have different lengths, we need two of them, conventionally n and m.

    Best, average, worst case scenarios

    The complexity calculated with Big O may differ depending on the input we feed into the algorithm, so we differentiate between best, average and worst case scenarios. In this article I'm talking about worst/average case scenarios. When printing out all pairs of elements from two arrays with nested loops, it's always O(n·m) no matter what. But when searching or sorting, certain sets of elements can hit the best or the worst boundary.

    For instance, with linear search, if the first element of the list happens to be the one we're looking for, the search finishes in a single step – faster in that best case than binary search, which usually has a satisfying complexity of O(log n).
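    To make the best/worst case difference concrete, here is a minimal linear-search sketch (the class and method names are my own, for illustration):

```java
public class LinearSearchDemo {
    // Returns the index of key in arr, or -1 if not found. O(n) worst case.
    static int linearSearch(int[] arr, int key) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == key) {
                return i; // best case: key sits at index 0 -> one comparison, O(1)
            }
        }
        return -1; // worst case: key is absent -> n comparisons
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 9, 4};
        System.out.println(linearSearch(data, 7));  // 0  (best case, found immediately)
        System.out.println(linearSearch(data, 4));  // 3  (worst case for a hit)
        System.out.println(linearSearch(data, 42)); // -1 (worst case, not found)
    }
}
```

    With the key at index 0 the loop does one comparison; with a missing key it does n comparisons – same algorithm, very different runtimes depending on the input.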

    Those are the basics. Let's check some real-life examples. We'll try to estimate the Big O for some well-known searching and sorting algorithms!

    Bubble sort:

    void bubbleSort(int[] arr) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++)
            for (int j = 0; j < n - i - 1; j++)
                if (arr[j] > arr[j + 1]) {
                    // swap arr[j] and arr[j+1]
                    int temp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                }
    }

    We have two nested for loops sorting an array of n elements, therefore we have O(n^2) here. Note that the inner loop gets shorter with every pass, but the total number of comparisons is still proportional to n^2/2, and constant factors are dropped in Big O – so it stays O(n^2).

    Binary search:

    The binary search algorithm works on a sorted array. It provides O(log n) time – really good performance. It works by cutting the search range in half on each iteration, so it has fewer and fewer elements to look through: first n/2 elements, then n/4, n/8, n/16 and so on. The number of remaining candidates shrinks exponentially, which means the number of steps grows only logarithmically.

     private static int binarySearch0(long[] a, int fromIndex, int toIndex,
                                         long key) {
            int low = fromIndex;
            int high = toIndex - 1;
            while (low <= high) {
                int mid = (low + high) >>> 1;
                long midVal = a[mid];
                if (midVal < key)
                    low = mid + 1;
                else if (midVal > key)
                    high = mid - 1;
                else
                    return mid; // key found
            }
            return -(low + 1);  // key not found.
        }

    This code is pasted from the JDK 11 Arrays class implementation. The first parameter is the array, the second the starting index (pass 0 to search the entire array), the third the ending index (pass the array length to search the entire array), and the last the element we are looking for. I advise you to analyze this code, as the algorithm is really elegant and commonly used (you may encounter it in a job interview!).
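    You don't call the private binarySearch0 directly – the public java.util.Arrays.binarySearch API wraps it. A quick usage sketch (the array values are my own):

```java
import java.util.Arrays;

public class BinarySearchDemo {
    public static void main(String[] args) {
        // Binary search requires a sorted array.
        long[] sorted = {2, 5, 8, 13, 21, 34};

        // Found: returns the index of the key.
        System.out.println(Arrays.binarySearch(sorted, 13L)); // 3

        // Not found: returns -(insertionPoint + 1), so the caller
        // can tell where the key would have to be inserted.
        System.out.println(Arrays.binarySearch(sorted, 14L)); // -5
    }
}
```

    The negative return value for a miss is why the JDK code ends with `return -(low + 1);` – it encodes the insertion point without colliding with valid indices.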

    That's it when it comes to the basics of Big O notation. You are now familiar with the core concepts and some real-life examples.

    Should you have any questions, don't hesitate to reach me via LinkedIn, Facebook, email or the comments.

  • How to modify a melody using Python? Part I


    Starting with this article, I am introducing a series on a topic that is original (at least for me and this blog): first because of the programming language I'm using to show my idea, and second because of the subject – melody modification or, more precisely, sound processing in Python.

    I have already written a thesis on this subject in my native language (Polish), so I decided to start this series so a worldwide audience can learn about it too.

    Theory

    First of all we need to introduce a few key concepts so we can transform them into a working application. We need a basic understanding of two domains: sound and signal processing.

    Sound

    Every sound we can hear is really a wave that propagates through the air: a change in air pressure over time.

    Fig. 1, By Rburtonresearch – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=45035734

    Those changes in pressure – amplitudes – can be measured and saved to a file on a computer (for instance as a WAVE file). A WAVE file stores this amplitude information directly.

    Signal processing

    FT & FFT

    Ok, we have skimmed the topic already, but how do we find what frequency we're hearing at a particular moment in time? We need a brilliant method called the Fourier Transform (FT) or, more precisely, its computer-suited algorithmic version, the Fast Fourier Transform (FFT). A naive implementation of the FT is really inefficient and would take too much time to process sound data, so that's where the FFT comes in. The history of the FT and FFT is a great idea for a different article, though.

    Thanks to Wikipedia we can visualize the idea of the FT on charts.

    Fig. 2, FFT of a cosine summation function – source: https://en.wikipedia.org/wiki/File:FFT_of_Cosine_Summation_Function.svg

    The first chart shows the already familiar amplitude-over-time plot of the sample function (a sum of cosines).

    The latter presents the result of the Fourier Transform (mathematically speaking, the FT is a linear operator) applied to the sample input shown in the first chart. As a result we get the frequencies that make up the input wave, and their power in dB.
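    We can reproduce this idea in a few lines of Python. This is a standalone sketch using NumPy; the sample rate and component frequencies are my own choices for the example:

```python
import numpy as np

sr = 1000                        # sample rate in Hz
t = np.arange(sr) / sr           # one second of timestamps
# A sum of two cosines: 50 Hz at amplitude 1.0 and 120 Hz at amplitude 0.5.
x = np.cos(2 * np.pi * 50 * t) + 0.5 * np.cos(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(x))        # magnitude spectrum
freqs = np.fft.rfftfreq(len(x), 1 / sr)  # frequency (Hz) of each bin

# The spectrum peaks sit exactly at the component frequencies.
peaks = freqs[spectrum > len(x) / 8]
print(peaks)  # -> [ 50. 120.]
```

    The time-domain signal looks like a messy wiggle, yet the transform cleanly recovers the two frequencies it was built from – exactly the picture in Fig. 2.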

    STFT

    We have the frequencies of the entire sample, but we still don't know what frequency (note) the sound has at a certain point in time. For that problem, we use another brilliant method with a cool acronym: the Short-Time Fourier Transform (STFT). The STFT lets us calculate the frequencies and their power in dB at a certain point in time.

    A note, for instance C7, is a sound wave with a frequency of 2093.00 Hz. The table available at https://pages.mtu.edu/~suits/notefreqs.html shows frequencies for most of the notes a human ear can hear.

    The conclusion is that having a value of frequency and its power at a certain point in time allows us to reproduce the melody of the entire sample!
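    The note–frequency mapping mentioned above doesn't have to be looked up in a table; it follows from the equal-temperament formula, 12·log2(f/440) semitones relative to A4 = 440 Hz. Here is a small helper of my own that computes the nearest note name:

```python
import math

# Note names in semitone order starting from A.
NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def note_name(freq_hz: float) -> str:
    """Return the nearest equal-temperament note name for a frequency."""
    # Number of semitones above (or below) the A4 = 440 Hz reference.
    semitones = round(12 * math.log2(freq_hz / 440.0))
    name = NOTE_NAMES[semitones % 12]
    # Octave numbers increment at C; A4 is 9 semitones above C4.
    octave = 4 + (semitones + 9) // 12
    return f"{name}{octave}"

print(note_name(2093.00))  # C7
print(note_name(440.00))   # A4
```

    Feeding in the 2093.00 Hz value from the article gives C7, matching the table linked above.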

    Fig. 3, STFT result on human speech sample – Spectral Analysis

    The figure above depicts the result of the STFT applied to a sound sample – a spectral analysis (it's a different sample, not the same wave as in the two previous figures). The green line, labeled F0, is a chart of the melody computed by the pYIN algorithm. By modifying the frequencies at certain points in time, we can change the melody. Then we just need to use the iSTFT (the inverse operation) to obtain the modified sound.

    Fundamental frequency

    One last piece of theory. What is the fundamental frequency labeled as F0?

    According to Wikipedia, "the fundamental is the musical pitch of a note that is perceived as the lowest partial present." Pitch can be mapped to frequency in our terms. Looking at Fig. 3, we can see the brightest area spanning from around 0.06 to 0.36 s – the same one marked with the blue line. This is the lowest frequency of all the harmonics at a certain point in time. The higher harmonics (F1, F2) can also be seen in the figure as brighter stripes (higher power in dB) at higher frequencies.

    Implementation

    That's a lot of theory behind this concept. Luckily, there is a wonderful Python library that does the math for us, so we can focus on our idea instead of implementing it by hand (though that would be fun too). librosa is a rich library for music and audio analysis. We just need to install Python, then install librosa. I am using JetBrains' PyCharm, which makes this quite easy, but you are free to choose your favorite IDE.

    PyCharm

    Once you have acquired PyCharm go to File -> Settings.

    Then choose Python Interpreter and click the 'plus' sign. Type in 'librosa' and choose the latest version (0.8.0 at the time of writing). Click Install Package, then OK, wait for the installation to finish and you are good to go.

    Other libraries needed for showing a plot are 'matplotlib' and 'numpy'. Repeat the steps above and install the newest versions.
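    If you prefer the command line over PyCharm's package dialog, the same setup can be done with pip (run inside the environment your project uses):

```shell
# Install librosa and the plotting libraries into the active environment.
pip install librosa matplotlib numpy
```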

    Testing librosa

    Create a new .py file and write the code below.

    import librosa
    from librosa import display
    import matplotlib.pyplot as plt
    import numpy as np
    
    fig, ax = plt.subplots()
    
    y, sr = librosa.load(librosa.ex('trumpet'))
    stft_absolute_values = np.abs(librosa.stft(y))
    img = display.specshow(librosa.amplitude_to_db(stft_absolute_values, ref=np.max), y_axis='log', x_axis='time', ax=ax)
    
    ax.set_title('Power spectrogram')
    
    fig.colorbar(img, ax=ax, format="%+2.0f dB")
    plt.show()

    Run the code and voila! We have quickly transformed sound waves to frequencies and created a really neat diagram using Python and librosa. Good job.

    This is where I stop. In the next part I am going to show you an actual implementation of the melody-modification code: converting samples with the STFT and back, and explaining thoroughly how to read, understand and modify the results of the Fourier Transform. Hope to see you there!

    Should you have any questions or remarks, feel free to reach out to me!

  • How can good Unit Tests save your Dev’s well-being?


    Imagine you have to implement some business functionality in the application you're responsible for. Sounds familiar? It may be something simple, like adding a timestamp with a time-zone offset to a db entity. What should be done after successfully implementing the aforementioned feature? Unit tests should be written, of course.

    A common mistake, made especially by beginners (but by experienced devs as well), is writing unit tests that exercise methods and the internal logic of classes. That covers implementation details instead of core business features. It may cause us to miss some not-so-obvious edge cases, because we write tests aligned to our implementation rather than to the user story or specification. A well-written specification (tests, in this case) helps us fight that problem.

    BDD (Behaviour-Driven Development) was brought to life to save us from that kind of unpleasant result. It puts the emphasis on testing real functionality before testing the dry implementation details of our code. Here is an example visualizing this error-generating peril.

    The app is a little service (it could be part of some microservice system) responsible for receiving events (e.g. from a service bus) and creating or modifying user data. Sounds simple, but beware: there's always a catch.

    Project structure looks like this. It’s created in Gradle. Click here to find out how to manage Gradle projects.

    Figure 1. Project structure visualized by IntelliJ

    build.gradle script contains info about dependencies and other project properties.

    build.gradle (snippet)

    (...)
    
    dependencies {
        implementation 'org.mockito:mockito-core:2.21.0'
        implementation 'com.fasterxml.jackson.core:jackson-databind:2.9.8'
        implementation 'org.slf4j:slf4j-api:1.7.30'
        compileOnly 'org.projectlombok:lombok'
        annotationProcessor 'org.projectlombok:lombok'
        // JUnit needs an explicit version; AssertJ provides the
        // assertThat/tuple assertions used in the tests below.
        testImplementation 'junit:junit:4.13'
        testImplementation 'org.assertj:assertj-core:3.11.1'
    }
    
    (...)

    Now I am going to briefly explain the responsibility of each class.

    Model classes hold information about our domain. The first one is an entity-like class that contains info about our AppUser. That's pretty straightforward.

    AppUser.class

    @Setter
    @Getter
    @NoArgsConstructor
    @AllArgsConstructor
    @Builder
    public class AppUser {
        private Long id;
        private String name;
        private List<String> assignedApps;
        private OffsetDateTime userLocalCreatedAt;
        private OffsetDateTime userLocalUpdatedAt;
        public AppUser(Long id, String name, List<String> assignedApps, OffsetDateTime timestamp) {
            this.id = id;
            this.name = name;
            this.assignedApps = assignedApps;
            this.userLocalCreatedAt = timestamp;
            this.userLocalUpdatedAt = timestamp;
        }
    }

    Another model class holds the data for events received from outside (e.g. the aforementioned bus).

    AppUserChangeEvent.class

    @Getter
    @AllArgsConstructor
    @NoArgsConstructor
    @Setter
    @Builder
    public class AppUserChangeEvent {
        private Long id;
        private String name;
        private List<String> assignedApps;
        private OffsetDateTime timestamp;
    }

    The AdminListener class is a pseudo message listener. I will imitate the receive-method invocation as if there were a queue/bus attached to it. It is responsible for intercepting an event, parsing it with ObjectMapper and passing it to UserAdminProcessor for further processing.

    AdminListener.class

    public class AdminListener {
        private final UserAdminProcessor userAdminProcessor;
        private final ObjectMapper objectMapper;
        public AdminListener(UserAdminProcessor processor, ObjectMapper objectMapper) {
            this.userAdminProcessor = processor;
            this.objectMapper = objectMapper;
        }
        @SneakyThrows
        public void receive(String message) {
            try {
                var appUser = objectMapper.readValue(message, AppUserChangeEvent.class);
                userAdminProcessor.process(appUser);
            } catch (IOException exc) {
                System.out.println("Cannot read message");
                throw exc;
            }
        }
    }

    UserAdminProcessor contains business logic code which ‘decides’ what to do with the event it has received. AppUser can be either modified or created, depending on its existence in the database.

    UserAdminProcessor.class

    public class UserAdminProcessor {
        private final InMemoryAppUserRepo repository;
        public UserAdminProcessor(InMemoryAppUserRepo repository) {
            this.repository = repository;
        }
        void process(AppUserChangeEvent event) {
            var user = repository.findUser(event.getId());
            if (user != null) {
                modifyUser(event, user);
            } else {
                createUser(event);
            }
        }
        private void modifyUser(AppUserChangeEvent event, AppUser user) {
            user.getAssignedApps().addAll(event.getAssignedApps());
            user.setName(event.getName());
            user.setUserLocalUpdatedAt(event.getTimestamp());
            repository.save(user);
            System.out.println("## User " + user.getId() + " has been modified.");
        }
        private void createUser(AppUserChangeEvent event) {
            var user = new AppUser(event.getId(), event.getName(), event.getAssignedApps(), event.getTimestamp());
            repository.save(user);
            System.out.println("## User " + user.getId() + " has been created.");
        }
    }

    The pseudo-repository InMemoryAppUserRepo holds a standard HashMap and some convenience methods for saving and retrieving objects from the map; it thereby imitates a real db repository, which we know well from Spring Data or the DAO pattern.

    InMemoryAppUserRepo.class

    public class InMemoryAppUserRepo {
        private final Map<Long, AppUser> store = new HashMap<>();
        public void save(AppUser entity) {
            putOrReplace(entity);
            System.out.println("### Entity '" + entity + "' saved!");
        }
        public AppUser findUser(Long id) {
            return store.getOrDefault(id, null);
        }
        private void putOrReplace(AppUser entity) {
            if (findUser(entity.getId()) == null) {
                store.put(entity.getId(), entity);
            } else {
                store.replace(entity.getId(), entity);
            }
        }
    }

    That's it for the class descriptions. But how does the data flow work and what does it look like? The figure should answer that question.

    Now let’s run our unit tests!

    I am using the JUnit test framework to run this.

    A common and not-so-correct 'tradition' is to test each class separately, treating each method as the 'unit' in question. As I mentioned in the introduction, this may cause us to miss some crucial bugs. The test class below presents such an example. The tests pass smoothly. But are we sure the code is really correct?

    NotEntirelyCorrectUnitTestingTest.class

    @RunWith(MockitoJUnitRunner.class)
    public class NotEntirelyCorrectUnitTestingTest {
    
    
        @Mock
        private InMemoryAppUserRepo repo;
    
        @InjectMocks
        private UserAdminProcessor processor;
    
        @Captor
        private ArgumentCaptor<AppUser> appUserArgumentCaptor;
    
    
        @Test
        public void userPropertiesAreSuccessfullyUpdated() {
            given(repo.findUser(9L))
                    .willReturn(AppUser.builder()
                            .id(9L)
                            .name("TEST_USER")
                            .assignedApps(new ArrayList<>(Arrays.asList("GameFactor", "AdobeXD")))
                            .userLocalCreatedAt(OffsetDateTime.parse("2020-08-22T11:26:09.00023+01:00"))
                            .userLocalUpdatedAt(OffsetDateTime.parse("2020-09-23T23:21:34.45292+01:00"))
                            .build());
    
    
            //when listener receives a change event
            processor.process(AppUserChangeEvent.builder()
                    .id(9L)
                    .name("TEST_USER")
                    .assignedApps(Arrays.asList("Notepad", "vi"))
                    .timestamp(OffsetDateTime.parse("2020-09-28T18:21:34.45292+01:00"))
                    .build());
    
    
            //then a correctly modified entity should be passed to the repository to be saved
            then(repo)
                    .should()
                    .save(appUserArgumentCaptor.capture());
    
            assertThat(appUserArgumentCaptor.getAllValues())
                    .hasSize(1)
                    .extracting(
                            AppUser::getId,
                            AppUser::getName,
                            user -> user.getAssignedApps().size(),
                            AppUser::getUserLocalCreatedAt,
                            AppUser::getUserLocalUpdatedAt
                    )
                    .containsExactly(
                            tuple(
                                    9L,
                                    "TEST_USER",
                                    4,
                                    OffsetDateTime.parse("2020-08-22T11:26:09.00023+01:00"),
                                    OffsetDateTime.parse("2020-09-28T18:21:34.45292+01:00")
    
                            )
                    );
    
    
        }
    
        @Test
        public void userIsSuccessfullyCreated() {
            given(repo.findUser(1L))
                    .willReturn(null);
    
    
            //when listener receives a change event
            processor.process(AppUserChangeEvent.builder()
                    .id(1L)
                    .name("TEST_USER2")
                    .assignedApps(Arrays.asList("Photoshop", "Eclipse", "Calc"))
                    .timestamp(OffsetDateTime.parse("2020-09-22T18:29:39.23023+01:00"))
                    .build());
    
    
            //then a correctly modified entity should be passed to the repository to be saved
            then(repo)
                    .should()
                    .save(appUserArgumentCaptor.capture());
    
            assertThat(appUserArgumentCaptor.getAllValues())
                    .hasSize(1)
                    .extracting(
                            AppUser::getId,
                            AppUser::getName,
                            AppUser::getAssignedApps,
                            AppUser::getUserLocalCreatedAt,
                            AppUser::getUserLocalUpdatedAt
                    )
                    .containsExactly(
                            tuple(
                                    1L,
                                    "TEST_USER2",
                                    Arrays.asList("Photoshop", "Eclipse", "Calc"),
                                    OffsetDateTime.parse("2020-09-22T18:29:39.23023+01:00"),
                                    OffsetDateTime.parse("2020-09-22T18:29:39.23023+01:00")
                            )
                    );
        }
    }

    So how should it be done instead?

    UserAdministrationTest.class

    @RunWith(MockitoJUnitRunner.class)
    public class UserAdministrationTest {
    
    
        private AdminListener listener;
    
        @Mock
        private InMemoryAppUserRepo repo;
    
        @InjectMocks
        private UserAdminProcessor processor;
    
        @Captor
        private ArgumentCaptor<AppUser> appUserArgumentCaptor;
    
        @Before
        public void setup() {
            listener = new AdminListener(processor, new ObjectMapper()
                    .findAndRegisterModules());
        }
    
        @Test
        public void userPropertiesAreSuccessfullyUpdated() {
            given(repo.findUser(9L))
                    .willReturn(AppUser.builder()
                            .id(9L)
                            .name("TEST_USER")
                            .assignedApps(new ArrayList<>(Arrays.asList("GameFactor", "AdobeXD")))
                            .userLocalCreatedAt(OffsetDateTime.parse("2020-08-22T11:26:09.00023+01:00"))
                            .userLocalUpdatedAt(OffsetDateTime.parse("2020-09-21T18:21:34.45292+01:00"))
                            .build());
    
    
            //when listener receives a change event
            listener.receive("{" +
                    "  \"id\": \"9\"," +
                    "  \"name\": \"TEST_USER\"," +
                    "  \"assignedApps\": [" +
                    "    \"Notepad\"," +
                    "    \"MediaCoder\"" +
                    "  ]," +
                    "  \"timestamp\": \"2020-10-09T12:56:09.90023+01:00\"" +
                    "}");
    
    
            //then a correctly modified entity should be passed to the repository to be saved
            then(repo)
                    .should()
                    .save(appUserArgumentCaptor.capture());
    
            assertThat(appUserArgumentCaptor.getAllValues())
                    .hasSize(1)
                    .extracting(
                            AppUser::getId,
                            AppUser::getName,
                            s -> s.getAssignedApps().size(),
                            AppUser::getUserLocalCreatedAt,
                            AppUser::getUserLocalUpdatedAt
                    )
                    .containsExactly(
                            tuple(
                                    9L,
                                    "TEST_USER",
                                    4,
                                    OffsetDateTime.parse("2020-08-22T11:26:09.00023+01:00"),
                                    OffsetDateTime.parse("2020-10-09T12:56:09.90023+01:00")
    
                            )
                    );
        }
    
        @Test
        public void userIsSuccessfullyCreated() {
            given(repo.findUser(1L))
                    .willReturn(null);
    
    
            //when listener receives a change event
            listener.receive("{" +
                    "  \"id\": \"1\"," +
                    "  \"name\": \"TEST_USER2\"," +
                    "  \"assignedApps\": [" +
                    "    \"Photoshop\"," +
                    "    \"Eclipse\"," +
                    "    \"Calc\"" +
                    "  ]," +
                    "  \"timestamp\": \"2020-09-22T18:29:39.23023+01:00\"" +
                    "}");
    
    
            //then a correctly modified entity should be passed to the repository to be saved
            then(repo)
                    .should()
                    .save(appUserArgumentCaptor.capture());
    
            assertThat(appUserArgumentCaptor.getAllValues())
                    .hasSize(1)
                    .extracting(
                            AppUser::getId,
                            AppUser::getName,
                            AppUser::getAssignedApps,
                            AppUser::getUserLocalCreatedAt,
                            AppUser::getUserLocalUpdatedAt
                    )
                    .containsExactly(
                            tuple(
                                    1L,
                                    "TEST_USER2",
                                    Arrays.asList("Photoshop", "Eclipse", "Calc"),
                                    OffsetDateTime.parse("2020-09-22T18:29:39.23023+01:00"),
                                    OffsetDateTime.parse("2020-09-22T18:29:39.23023+01:00")
                            )
                    );
        }
    }

    As you can see, I am now testing the entire flow for two separate cases: the user already exists, so we modify its properties; and there's no user yet, so we create one. But this time the test cases don't pass. Why?

    Take a look at the timestamps that are sent in the message to the listener. The message is parsed by ObjectMapper, which by default adjusts the date-time to the server's local time zone (e.g. 11:09:00.33+01:00 is converted to 10:09:00.33Z – UTC). That shouldn't happen: we're using OffsetDateTime precisely because we want the exact date-time that was sent to our service – that's why the event class contains a timestamp field at all. We need the user's local time with its original offset, not the time converted to UTC or to our server's zone. This kind of issue is easily detected by feature-driven tests. So all we need to do is change the ObjectMapper configuration so it doesn't adjust any temporal values.

       @Before
        public void setup() {
            listener = new AdminListener(processor, new ObjectMapper()
                    .configure(DeserializationFeature.ADJUST_DATES_TO_CONTEXT_TIME_ZONE, false)
                    .findAndRegisterModules());
        }

    Now all the tests pass and we have successfully detected a potentially harmful bug.

    Full code is available at my github repo https://github.com/rocketzki/bdd-example

    I have shown you a case study on the importance of functionality testing. Additionally, you had a chance to code a simple microservice that could serve a real business case (after adding real framework support, of course :)).

    Hope you liked it.

    In case of any questions and/or remarks, don't hesitate to comment or reach out to me!

  • Striking the Right Balance – a good PR Review


    The short and somewhat vague answer is: we need to balance thoroughness and practicality.

    What is the point of a pull request review? First of all, it helps maintain the quality of our code. Secondly, it is a great educational tool (one of the best, if carried out well), as it lets us share knowledge across the team in practice.

    However, the question often arises: how picky should we be when conducting a pull request review? Striking the right balance between thoroughness and efficiency is key. While it’s important to catch and address issues that may lead to bugs or performance problems, being excessively picky can slow down the development process and demoralize contributors.

    In a pull request review, it’s essential to focus on the most critical aspects of the code, such as functionality, security, and maintainability. While small formatting or stylistic issues are worth mentioning, it’s crucial not to get bogged down in nitpicking.

    Open communication and collaboration play a significant role in finding the right balance in pull request reviews. It’s essential to maintain a constructive and respectful tone when providing feedback. In the end, the goal of pull request reviews should be to improve the codebase collectively while respecting the time and effort of everyone involved in the development process.

    Striking the right balance in pull request reviews also involves considering the context of the change. Is it a critical bug fix, a new feature, or a minor enhancement? Understanding the significance of the change can help reviewers gauge the level of scrutiny required. Critical changes may necessitate a more rigorous review, while minor updates may benefit from a lighter touch.

    Summary

    Reviewers should focus on the most critical aspects of the code while avoiding excessive nitpicking.

    Open communication and collaboration, as well as an understanding of the context of the change, can help strike the right balance.

    Ultimately, the goal is to improve the codebase, ensure code quality, and facilitate efficient development while respecting the efforts of all team members.

    First, two examples of reviews that are too picky:

    • Excessive Style Guide Adherence: Imagine a scenario where a developer has submitted a pull request that introduces a critical bug fix. However, the reviewer becomes overly fixated on minor formatting details, such as the placement of whitespace or the use of single vs. double quotes for strings. While adherence to coding style guides is essential, focusing on such minutiae in a critical bug fix can be counterproductive and frustrating for the developer.
    • Endless Refactoring Requests: In another case, a reviewer might continuously request refactoring of code that is already functional and efficient. They might suggest rewriting entire sections of code to follow a specific design pattern, even if the existing implementation is perfectly suitable. This level of pickiness can lead to unnecessary delays and create a sense of frustration among developers who feel their work is constantly being questioned.

    Now, let’s consider an example of a good pull request review:

    Balanced Feedback on Functionality and Security: In this instance, a developer has submitted a pull request for a new feature. The reviewer thoroughly examines the code, providing constructive feedback on potential security vulnerabilities, suggesting improvements to the user interface for better usability, and pointing out a few areas where code clarity could be enhanced. The feedback is clear and well-documented, and it prioritizes the critical aspects of the code that need attention, ensuring that the final result is not only functional but also secure and user-friendly. This type of review strikes a balance between thoroughness and efficiency, contributing positively to the project’s overall quality.