Abhishek Shenoy

Reputation: 95

Persistence and Split-Brain Scenarios

1. How does Ignite handle a split-brain scenario in clustered mode?

2. In the case of putAll, does Ignite hit the persistent store once per entry, or is everything written to the store at once?

3. How does putAll work with regard to a persistent store if we set a batch size?

4. In the case of a partitioned cache with one backup, in what order does the data move: primary->backup->persistence, or primary->backup while the persistence store is updated asynchronously?

5. If an update is made directly in the persistence store, what has to be done to reflect it in the cache without reloading? (How do we handle backend updates?)

6. After an update in the backend, reloading the cache with loadCache does not pick up the changes, and neither does a plain get(); the updates show up only after clearing the cache and then calling loadCache or get. Is this the right way to reload a cache? The test below demonstrates this.

    Person p1 = new Person(1, "Benakaraj", "KS", 11, 26, 1000);
    Person p2 = new Person(2, "Ashwin", "Konale", 13, 26, 10000);
    Connection con = null;
    Statement stmt = null;

    con = ds.getConnection();
    stmt = con.createStatement();
    String sql =
        "create table Person(per_id int,name varchar(20),last_name varchar(20),org_id int,age int,salary REAL,primary key(per_id))";
    stmt.executeUpdate(sql);

    ROCCacheConfiguration<Integer, Person> personConfig = new ROCCacheConfiguration<>();
    personConfig.setName("bkendupdtCache");
    personConfig.setCacheMode(CacheMode.PARTITIONED);
    JdbcType jdbcType = new JdbcType();

    jdbcType.setCacheName("bkendupdtCache");
    jdbcType.setDatabaseSchema("ROC4Test");
    jdbcType.setDatabaseTable("Person");
    jdbcType.setKeyType(Integer.class);
    jdbcType.setValueType(Person.class);
    // Key fields for PERSON.
    Collection<JdbcTypeField> keys = new ArrayList<>();
    keys.add(new JdbcTypeField(Types.INTEGER, "per_id", int.class, "perId"));
    jdbcType.setKeyFields(keys.toArray(new JdbcTypeField[keys.size()]));

    // Value fields for PERSON.
    Collection<JdbcTypeField> vals = new ArrayList<>();
    vals.add(new JdbcTypeField(Types.INTEGER, "per_id", int.class, "perId"));
    vals.add(new JdbcTypeField(Types.VARCHAR, "name", String.class, "name"));
    vals.add(new JdbcTypeField(Types.VARCHAR, "last_name", String.class, "lastName"));
    vals.add(new JdbcTypeField(Types.INTEGER, "org_id", int.class, "orgId"));
    vals.add(new JdbcTypeField(Types.INTEGER, "age", int.class, "age"));
    vals.add(new JdbcTypeField(Types.FLOAT, "salary", Float.class, "salary"));
    jdbcType.setValueFields(vals.toArray(new JdbcTypeField[vals.size()]));

    Collection<JdbcType> jdbcTypes = new ArrayList<>();

    jdbcTypes.add(jdbcType);

    CacheJdbcPojoStoreFactory<Integer, Person> cacheJdbcPojoStoreFactory =
        context.getBean(CacheJdbcPojoStoreFactory.class);
    cacheJdbcPojoStoreFactory.setTypes(jdbcTypes.toArray(new JdbcType[jdbcTypes.size()]));

    personConfig.setCacheStoreFactory(
        (Factory<? extends CacheStore<Integer, Person>>) cacheJdbcPojoStoreFactory);
    personConfig.setReadThrough(true);
    personConfig.setWriteThrough(true);
    ROCCache<Integer, Person> personCache2 = rocCachemanager.createCache(personConfig);
    personCache2.put(1, p1);
    personCache2.put(2, p2);
    assertEquals(personCache2.get(2).getName(), "Ashwin");
    sql = "update Person set name='Abhi' where per_id=2";
    stmt.execute(sql);

    // Fails: the assertion still sees the stale value after the reload.
    personCache2.loadCache(null);
    assertEquals(personCache2.get(2).getName(), "Abhi");

    // Works fine: clearing the entry forces a read-through on the next get.
    personCache2.clear(2);
    assertEquals(personCache2.get(2).getName(), "Abhi");

    // Works fine: clearing the whole cache and reloading picks up the update.
    personCache2.clear();
    personCache2.loadCache(null);
    assertEquals(personCache2.get(2).getName(), "Abhi");

    sql = "drop table Person";
    stmt.executeUpdate(sql);
    stmt.close();
    con.close();
    rocCachemanager.destroyCache("bkendupdtCache");

Upvotes: 1

Views: 1435

Answers (1)

Valentin Kulichenko

Reputation: 8390

  1. By default you will get two independent clusters that will never join each other again (otherwise data inconsistency would be possible). You will have to manually stop one of the clusters and restart it after the network is restored. However, automatic resolution can be implemented as a plugin; e.g., GridGain provides this functionality out of the box: https://gridgain.readme.io/docs/network-segmentation

  2. Ignite tries to minimize persistence store invocations as much as possible. If your storage supports batch reads and writes, it's a good idea to take advantage of this when implementing the loadAll, writeAll and deleteAll methods of the cache store; see the first sketch after this list.

  3. A batch update operation is split according to node mappings; each part of the batch is then persisted in a single store invocation on the corresponding primary node.

  4. The store is updated atomically with the primary node: if the write to the store fails, the cache is not updated, and vice versa. Backups are updated asynchronously in the background by default; see the second sketch after this list for how to change that.

  5. If possible, you should avoid this and treat Ignite as the primary data storage with an optional store at the backend (i.e., always access the data through the Ignite API). There is no easy way to propagate DB updates to Ignite.

  6. You can invalidate entries using the clear/clearAll methods, or reload them using the loadAll method; see the third sketch after this list. Another option is to use expirations: https://apacheignite.readme.io/docs/expiry-policies
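For point 2, here is a minimal sketch of a batching cache store, assuming a plain JDBC DataSource and an H2-style MERGE statement; the PersonBatchStore class name, the dataSource field, and the two-column upsert are illustrative assumptions, not part of the question's ROCCache setup. Overriding writeAll turns each per-node chunk of a putAll into a single JDBC batch round trip:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.Collection;
    import java.util.Collections;
    import javax.cache.Cache;
    import javax.cache.integration.CacheWriterException;
    import javax.sql.DataSource;
    import org.apache.ignite.cache.store.CacheStoreAdapter;

    public class PersonBatchStore extends CacheStoreAdapter<Integer, Person> {
        private final DataSource dataSource; // assumed to be injected

        public PersonBatchStore(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        @Override public Person load(Integer key) {
            // Single-row read-through SELECT; omitted to keep the sketch short.
            throw new UnsupportedOperationException("not part of this sketch");
        }

        @Override public void write(Cache.Entry<? extends Integer, ? extends Person> entry) {
            writeAll(Collections.singleton(entry));
        }

        // One JDBC batch per store invocation: the putAll chunk mapped to this
        // node is flushed in a single round trip instead of one per entry.
        @Override public void writeAll(
            Collection<Cache.Entry<? extends Integer, ? extends Person>> entries) {
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                     "merge into Person(per_id, name) key(per_id) values(?, ?)")) {
                for (Cache.Entry<? extends Integer, ? extends Person> e : entries) {
                    ps.setInt(1, e.getKey());
                    ps.setString(2, e.getValue().getName());
                    ps.addBatch();
                }
                ps.executeBatch();
            }
            catch (SQLException ex) {
                throw new CacheWriterException(ex);
            }
        }

        @Override public void delete(Object key) {
            // Single-row DELETE; omitted to keep the sketch short.
            throw new UnsupportedOperationException("not part of this sketch");
        }
    }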
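For point 4, the backup behaviour is controlled per cache by the write synchronization mode. A minimal sketch using the standard Ignite CacheConfiguration (the question's ROCCacheConfiguration wrapper is assumed to expose an equivalent setter):

    CacheConfiguration<Integer, Person> cfg = new CacheConfiguration<>("personCache");
    cfg.setCacheMode(CacheMode.PARTITIONED);
    cfg.setBackups(1);

    // PRIMARY_SYNC (the default) acknowledges a write as soon as the primary
    // node has it and updates backups asynchronously; FULL_SYNC makes the
    // write wait for the backups as well.
    cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

FULL_SYNC trades write latency for the guarantee that backups are never behind the primary once the write completes.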
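And for point 6, the two patterns that already work in the question's test, plus the expiry-based alternative; this assumes the ROCCacheConfiguration wrapper accepts a standard javax.cache.expiry factory, and the 5-minute TTL is a deliberately arbitrary example:

    // Invalidate one stale entry; the next get() reads through to the DB.
    personCache2.clear(2);
    Person fresh = personCache2.get(2);

    // Or let entries expire automatically, so reads eventually pick up
    // backend changes without an explicit clear (TTL value is illustrative).
    personConfig.setExpiryPolicyFactory(
        CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 5)));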

Upvotes: 1
