About me

I’m Giovanni merlos Mellini and this is my personal blog.

I’m the founder and president of Cyber Saiyan, a non-profit organization I started together with other friends in December 2017.

Since 2018 we have organized RomHack, a cyber security conference held yearly in Rome.

I occasionally speak at public community events, schools and universities.

I had my Internet glory days when I hacked a Bluetooth Low Energy (BLE) butt plug.

Sometimes I write about open source, security and boring stuff on this blog.

You can send me a secure email at giovanni [dot] mellini [at] protonmail [dot] com using my public PGP key, available here.

7 thoughts on “About me”

  1. Hi Giovanni,
    I have been working through your document on integrating MineMeld with Splunk. We would like to do the exact same thing here. So far, I have not run into any issues, but I do have a question. On the 4th post, you have this:

    From the prototypes page, clone the stdlib.localLogStash prototype to a new one, minemeldlocal.LOG-TO-SPLUNK. While cloning, change the 2 prototype parameters as follows:

    logstash_host: ;
    logstash_port: 1534 (or any port where Splunk will listen for MineMeld data).

    When you say logstash_host, do you mean the IP address of an indexer or a search head? I am assuming that it is a search head (we have 3 here), is that correct?

    Sincerely,
    Jon


    1. Hi Jon,
      good to hear you did the integration 🙂
      > When you say logstash_host, do you mean the IP address of an indexer or a search head? I am assuming that it is a search head (we have 3 here), is that correct?
      It depends on your architecture.
      In my case I send the data from the output node to an HA heavy-forwarder cluster, which I use to forward logs to the indexer cluster whenever I cannot install Splunk agents (e.g. firewalls or the MineMeld output node).
      I need to ship MineMeld logs reliably, the way the Splunk agent does on remote clients/servers while load balancing across the indexers.
      So the logstash_host IP is the heavy-forwarder VIP, built on top of a DRBD cluster of 2 heavy forwarders.
      To the heavy forwarders I push a small application (from my deployment server) that simply relays data received on the TCP port to the indexer cluster:
      etc/apps/forw_portsinput/default/inputs.conf
      [tcp://:1534]
      sourcetype=minemeld_ioc
      index=minemeld_ioc

      etc/apps/Hforw-conf/default/outputs.conf
      [tcpout]
      defaultGroup = default-autolb-group
      #indexers
      [tcpout:default-autolb-group]
      server=INDEXER1:9997,INDEXER2:9997
      autoLB = true
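
      The data path above can be sanity-checked locally with a minimal sketch: a plain TCP listener standing in for the heavy-forwarder input ([tcp://:1534] in inputs.conf above) and a sender standing in for the MineMeld output node. The port, JSON payload and names here are illustrative, not MineMeld's actual output format.

```python
import socket
import threading

# Hypothetical stand-ins: the listener plays the Splunk heavy-forwarder
# TCP input; the sender plays the MineMeld logstash output node, which
# pushes newline-delimited events over plain TCP.
received = []

def accept_one(server):
    conn, _ = server.accept()
    with conn:
        received.append(conn.recv(1024).decode())

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # any free port; the real setup uses 1534
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=accept_one, args=(server,))
t.start()

# Sender side: one indicator, one line, then close the connection
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b'{"indicator": "203.0.113.10", "type": "IPv4"}\n')

t.join()
server.close()
print(received[0].strip())
```

      In the real deployment the only difference is the address: the sender is the MineMeld box and the listener address is the heavy-forwarder VIP on port 1534.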

      So you have a reliable and HA architecture.
      I don’t send any logs directly to the search heads, because the data needs to be indexed in the indexer cluster so that any SH can access it with the right authorization.
      Hope this is clear,
      Giovanni


      1. Hi Giovanni,
        I see… we have a slightly different environment. We have 3 non-clustered search heads, 3 clustered indexers, and 1 cluster master. All of our log sources send syslog to a RHEL syslog-ng system running a light forwarder, not the heavy forwarder. syslog-ng is set up to filter the logs, so we don’t need the heavy forwarder. Thus, I am assuming that I need to set “logstash_host: ;” to the IP address of our syslog-ng server? Would I also need to install the TA on the syslog-ng server as well?

        Thanks Again!
        Jon


      2. > Thus, I am assuming that I need to set “logstash_host: ;” to the IP address of our syslog-ng server? Would I also need to install the TA on the syslog-ng server as well?
        Yes, in this case you need to send the logs to the syslog server, but I’m not sure whether the TA works there — you need to try 🙂
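
        For that case, an untested sketch of what the syslog-ng side might look like: receive the MineMeld output-node data on TCP 1534 and write it to a file that the forwarder already monitors. The port, file path and block names are hypothetical; flags(no-parse) keeps syslog-ng from trying to parse the payload as syslog.

```conf
# Hypothetical syslog-ng fragment (untested, as noted above)
source s_minemeld {
    network(ip(0.0.0.0) port(1534) transport("tcp") flags(no-parse));
};
destination d_minemeld {
    file("/var/log/minemeld/ioc.log");
};
log { source(s_minemeld); destination(d_minemeld); };
```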

